If you’re a JavaScript (or TypeScript) programmer and you’re interested in upping your AI skills, don’t feel like you have to leave your favorite programming language behind. JavaScript has its own set of tools that work with AI.

The key to working effectively with JavaScript in the context of AI is understanding the limitations of the language, as well as how it runs on a server or backend compared to running in a browser. Let’s explore!

Limitations in the Browser Versus Back-End

JavaScript can run directly on a server or back end using Node.js, which is built on V8, the JavaScript engine developed by Google. Technically, V8 recognizes two languages: JavaScript and an assembly-like language called WebAssembly (Wasm for short). Like many modern languages, including Python, Java, and C#, JavaScript runs within the confines of a “runtime,” but with today’s computers that overhead isn’t prohibitive. (If you’re a TypeScript programmer, remember that TypeScript is compiled to JavaScript before it runs in the browser.)

With AI, much of the work generally needs to be offloaded to a GPU, which typically has at least a thousand cores and is much better suited to the computational intensity of AI work.

Today’s browsers can access the GPU, but with limitations. Most browsers currently support a standard called WebGL, which allows the browser to use the GPU for high-performance graphics. WebGL can also use the GPU for some matrix computations, which is important in AI. But this doesn’t come close to covering what’s usually needed for AI.

Presently, a new standard called WebGPU is being developed that will open up much more of the GPU to JavaScript code running in the browser. If you’re a JavaScript programmer writing AI code, it’s imperative that you study these technologies:

  • WebGL: This is the older GPU standard for browsers, and while limited, it does have some use in AI. You don’t need to learn WebGL programming, but you’ll want to study what it can (and cannot) do so you’ll understand the limitations. You can learn about it here.
  • WebGPU: This is the new standard, not yet implemented in many browsers; Chrome supports an early version of it. As with WebGL, you don’t need to be an expert in it, but you’ll want to learn about its limitations compared to how Python, for example, can offload code directly to a GPU. WebGPU is a big advance, but it still imposes limits on how much AI work JavaScript can do in the browser. You can read more about it here.
  • Browser limits: Browsers impose memory limits, and so they might struggle to load a large language model (LLM). If you want to use a model in a browser app, you’ll need to look into smaller LLMs.
  • Front end/back end coordination: A big alternative to running AI code in the browser is to run it on the back end, with the browser app interacting with the back end through APIs. This means learning how to coordinate two separate apps (the front end and the back end) through those APIs.
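
The front end/back end split in that last bullet is mostly ordinary web plumbing. Here’s a minimal sketch of the browser side, assuming a hypothetical backend endpoint (/api/generate) that runs the model and returns JSON shaped like { text }; both the endpoint and the payload shape are illustrative, not a real API.

```javascript
// Hypothetical endpoint -- adjust to whatever your own backend exposes.
const API_URL = '/api/generate';

// Pure helper: build the JSON payload the (hypothetical) backend expects.
function buildRequest(prompt, maxTokens = 128) {
  return { prompt, max_tokens: maxTokens };
}

// Browser side: POST the prompt to the back end, which runs the model.
async function generate(prompt) {
  const res = await fetch(API_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildRequest(prompt)),
  });
  if (!res.ok) throw new Error(`Backend error: ${res.status}`);
  const { text } = await res.json(); // assumes the backend returns { text }
  return text;
}
```

The back end itself can use any of the server-side libraries discussed later, or even be written in a different language entirely.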

Although WebGL and WebGPU are limited compared to non-browser languages and platforms, if you have a good GPU (such as an Nvidia card) and the latest features of WebGPU, there’s still a lot you can do with JavaScript and AI.
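
Feature detection for these two APIs is straightforward: WebGPU exposes itself as navigator.gpu (a real browser API), and its absence means falling back to WebGL. A small sketch, with the decision factored into a helper so it can be tested outside a browser:

```javascript
// Pure helper: decide which GPU API is available, given a navigator-like
// object (passed in as a parameter so it can be tested outside the browser).
function pickGpuBackend(nav) {
  if (nav && 'gpu' in nav) return 'webgpu'; // WebGPU: newer, more capable
  return 'webgl';                           // fall back to WebGL
}

// In the browser you would call it with the real navigator:
async function initGpu() {
  if (pickGpuBackend(navigator) === 'webgpu') {
    // navigator.gpu.requestAdapter() is the WebGPU entry point.
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) return adapter;
  }
  return null; // caller falls back to a WebGL path (or to the back end)
}
```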

Learning AI and Models

In order for AI systems to understand language, they need to make use of a technology called Natural Language Processing (NLP). This is by no means a new field; researchers have studied it for decades. In 2017, however, a research team at Google introduced a new architecture called the transformer, which proved groundbreaking in the field of AI. It led to a new way of processing and generating language, which in turn opened new pathways to generative AI: the ability of AI tools to generate text that reads as though a human wrote it.

The main tool behind this new type of AI is called a Large Language Model (LLM). LLMs are AI models that can be “pre-trained” by inputting massive amounts of documents into them (such as all of Wikipedia). These LLMs then learn language structure and meaning, allowing them to effectively generate human language.

That, of course, is just the beginning of how today’s AI works. How much of it should you learn? You’ll want to at least understand the basics of how LLMs interact with your apps. From there, it depends on how far you want to go with your AI knowledge. The majority of people writing AI apps in the years to come will be writing apps that make use of pre-trained LLMs. If you’re in that group, you’ll want to make sure you can:

  • Understand what AI models are, and the details of what service they provide (but not necessarily how they work).
  • Be able to select an appropriate model for your application.
  • Use existing libraries to work with an AI model in your app.
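
As a taste of that last point, here’s a minimal sketch of using an existing library (TensorFlow.js) to load a pre-trained model and run inference. tf.loadLayersModel and model.predict are real TensorFlow.js calls; the model URL is a placeholder for one you’d host yourself.

```javascript
// Minimal sketch: load a pre-trained model and run inference with it.
// The URL is a placeholder -- point it at a real model.json you host.
async function classify(inputValues) {
  const tf = await import('@tensorflow/tfjs'); // assumes the package is installed
  const model = await tf.loadLayersModel('/models/my-model/model.json');
  const input = tf.tensor2d([inputValues]);    // batch of one example
  const output = model.predict(input);         // returns a tf.Tensor of scores
  const scores = await output.data();          // pull the scores back into JS
  input.dispose(); output.dispose();           // free GPU/CPU memory
  return Array.from(scores);
}

// Pure helper: index of the highest score (the predicted class).
function argmax(scores) {
  return scores.reduce((best, v, i) => (v > scores[best] ? i : best), 0);
}
```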

You might take your knowledge further and learn how LLMs work; however, this is an entire field of study in itself. But if it’s something that interests you, then by all means explore it.

Libraries for Running AI in the Browser

Once you understand the limitations of running AI in the browser and are comfortable with them, here are some libraries you’ll want to learn.

TensorFlow.js: This is a JavaScript version of a popular library normally used in Python. Look at its main page here and spend time becoming familiar with the demos; if you have the latest Chrome, they should run just fine. Take time to understand exactly what they’re doing, as each demo uses a different type of model, including facial recognition and what they call a “teachable model.” Then look at the source code to learn how each works, especially how they interact with the model and what parameters they use.

Right now, TensorFlow.js is the most popular and probably the most important JavaScript AI library to learn, so it’s the one you’ll want to start with. And when you’re ready to take things to the next level, check out the TensorFlow Python library, as that’s the flagship library.
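
If you want a feel for the API before diving into the demos, here’s about the smallest possible TensorFlow.js exercise: multiplying two matrices, plus a plain-JS reference implementation for sanity-checking the result. (Assumes the @tensorflow/tfjs npm package, or the equivalent script tag, is available.)

```javascript
// A first TensorFlow.js exercise: multiply two matrices on whatever
// backend tf.js picks (WebGL in the browser, CPU otherwise).
async function matmulDemo() {
  const tf = await import('@tensorflow/tfjs'); // assumes the package is installed
  const a = tf.tensor2d([[1, 2], [3, 4]]);
  const b = tf.tensor2d([[5, 6], [7, 8]]);
  const c = tf.matMul(a, b);
  const result = await c.array(); // [[19, 22], [43, 50]]
  a.dispose(); b.dispose(); c.dispose(); // tensors must be freed explicitly
  return result;
}

// Plain-JS reference implementation, handy for sanity-checking results.
function matMulRef(a, b) {
  return a.map((row) =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}
```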

(Note: One library you might run into through Google is a library called Keras.js. This library has been deprecated and is no longer supported, and so we don’t recommend learning it.)

ML5.js: This is a library written on top of TensorFlow.js, meant to simplify doing machine learning inside the browser. Note that it runs only in the browser, not in Node.js. One point the ML5 developers make is that it’s intended for creative types: as they put it, “artists, creative coders, and students.” But even if that doesn’t totally describe you, you’ll still want to spend time learning it, especially if you’re new to AI.
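
To see why ML5.js appeals to beginners, look at how little code an image classifier takes. A browser-only sketch, assuming the ml5 script tag is on the page and an image element with id "photo" exists (the element id is illustrative):

```javascript
// Browser-only sketch (ML5.js does not run under Node.js). Assumes the
// ml5 <script> tag is loaded and an <img id="photo"> element exists.
async function classifyPhoto() {
  const classifier = await ml5.imageClassifier('MobileNet'); // pre-trained model
  const results = await classifier.classify(document.getElementById('photo'));
  return topLabel(results); // results look like [{ label, confidence }, ...]
}

// Pure helper: pick the label with the highest confidence.
function topLabel(results) {
  return results.reduce((best, r) => (r.confidence > best.confidence ? r : best)).label;
}
```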

Transformers.js: Hugging Face, currently the primary portal for hosting LLMs, has created several AI libraries. Their primary one, called Transformers, allows you to interact with thousands of pretrained models. Although it was originally built for Python, Hugging Face now offers a JavaScript version called Transformers.js, built to be used in the browser. (It can use WebGPU for acceleration, so you may be limited in which browsers can run it at full speed.)
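
The core abstraction in Transformers.js is the pipeline. Here’s a sketch of sentiment analysis with it; note that the package has been published as @xenova/transformers, with newer releases under @huggingface/transformers, so check which name the docs you’re following use:

```javascript
// Sketch of the Transformers.js pipeline API.
async function analyzeSentiment(text) {
  const { pipeline } = await import('@xenova/transformers'); // assumes it's installed
  // Downloads a small pre-trained model on first use, then runs it locally.
  const classify = await pipeline('sentiment-analysis');
  const [result] = await classify(text); // e.g. { label: 'POSITIVE', score: 0.99 }
  return result;
}

// Pure helper: treat a result as positive only above a confidence threshold.
function isConfidentlyPositive(result, threshold = 0.8) {
  return result.label === 'POSITIVE' && result.score >= threshold;
}
```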

Other Browser Libraries

Developers are always creating new libraries and sharing them on GitHub with an open-source license. Some of these libraries take off and grow huge communities; others don’t. With some, the developers continue development over years; other times the developers abandon them, sometimes for no apparent reason.

During our research, we came across a library that’s smaller and doesn’t yet have a big community behind it, but it’s thoroughly intriguing and deserves a look. (You can get a feel for the size of a community by checking how many people have starred a project on GitHub. Presently this one has under 200 stars, which isn’t huge but does suggest growing interest.) It’s simply called LLM.js, and you can find it here.

Another very small one we found is called WebML. The name is intriguing, and it has some really great demos you’ll want to play with. This one has an even smaller community than LLM.js, but you still might want to study it and see if it fits your needs.

AI Tools for Backend and Server-based JavaScript

We already mentioned TensorFlow.js for the front end; if you’re doing backend or server-based JavaScript through Node.js, you can still use TensorFlow.js.

We found various pages online claiming that TensorFlow.js is strictly browser-based. That’s not actually correct, according to this page in the official documentation. TensorFlow.js chooses the appropriate backend for the environment it’s running in: if it recognizes it’s running in the browser, it will use WebGL; if it’s running standalone under Node.js, it can use the power of the full TensorFlow system and coordinate directly with an Nvidia GPU via what’s called CUDA.

Pro Tip: CUDA is Nvidia’s framework for running compiled C/C++ code on its GPUs. You don’t need to learn how to write CUDA code to do the type of AI coding we’re talking about here, but it’s worth at least reading a bit about it so you know what it is and where it fits into the AI infrastructure. Here’s a good starting point.
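
In practice, moving TensorFlow.js from browser to Node.js is mostly a matter of which package you load: @tensorflow/tfjs-node binds to the native TensorFlow library, and @tensorflow/tfjs-node-gpu adds the CUDA path just described. A small sketch:

```javascript
// Node.js sketch: the same tf.js API, backed by native TensorFlow.
// Swap in '@tensorflow/tfjs-node-gpu' if you have an Nvidia GPU with CUDA.
async function nodeDemo() {
  const tf = await import('@tensorflow/tfjs-node'); // assumes the package is installed
  const xs = tf.tensor1d([1, 2, 3, 4]);
  const doubled = tf.mul(xs, tf.scalar(2));
  const values = await doubled.array(); // [2, 4, 6, 8]
  xs.dispose(); doubled.dispose();
  return values;
}

// Plain-JS reference for the same computation, for sanity checks.
function scaleRef(values, factor) {
  return values.map((v) => v * factor);
}
```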

As with the browser, TensorFlow.js is currently the most important JavaScript library to learn for AI.

Transformers.js: Hugging Face originally built this library to run in the browser, but it can also be used outside the browser through Node.js. Here’s an example in the official documentation showing you how to use it.

Conclusion

As you learn the AI tools in JavaScript, you’ll likely want to explore both the browser-based and the server-based variations. But don’t forget that you can also take a hybrid approach: if you need more performance than you can get in a browser but want to build a web application, you can always do the AI work on the back end. The back end doesn’t have to be written in JavaScript, either; you could use Python, for example, which has a rich set of AI tools available.