See also: Machine learning terms, Neural network, Deep learning
TensorFlow Playground (also called the Neural Network Playground or, more recently, just Playground) is an interactive, browser-based visualization that lets users build and train small neural networks on toy datasets without writing code. It is hosted at playground.tensorflow.org and was created by Daniel Smilkov and Shan Carter of the Google Brain and Big Picture teams in 2016. The source code is open source under the Apache 2.0 license and lives in the tensorflow/playground repository on GitHub.
Despite its name, TensorFlow Playground does not run TensorFlow under the hood. The site is a self-contained TypeScript program that implements forward and backward propagation directly in the browser and renders the network with D3.js. The page itself states: "This is not an official Google product." The tool is widely used in classroom teaching, including Google's Machine Learning Crash Course, and is one of the most cited examples of interactive machine learning education on the web.
The Playground presents a single web page divided into four panels. From left to right these are: a data panel showing the training dataset on a two-dimensional grid, a feature panel listing the input features that feed into the network, a network panel showing the hidden layers and their neurons, and an output panel that visualizes the network's decision boundary in real time as training proceeds. Above these panels, a control bar lets the user choose the learning rate, activation function, regularization, regularization rate, problem type (classification or regression), and dataset noise level. A play button drives one epoch of mini-batch gradient descent per step, with training loss and test loss plotted at the top right corner of the output panel.
The combination is deliberately small. Inputs live in two dimensions, datasets contain at most a few hundred points, and the network supports up to six hidden layers of up to eight neurons each. The constraint is pedagogical: a small enough problem fits on a single screen and trains in seconds, so users can adjust a hyperparameter, click play, and see the effect almost immediately. The original paper that introduced the Playground describes the goal as letting users "experiment via direct manipulation rather than coding, enabling them to quickly build an intuition about neural nets" (Smilkov et al., 2017).
TensorFlow Playground was launched in 2016, alongside the broader public push around TensorFlow that followed the framework's open-source release in November 2015. The two principal authors are Daniel Smilkov and Shan Carter, both then at Google. The tool was developed in collaboration with members of the Google Brain team and the Big Picture group, and its design and feedback credits explicitly include D. Sculley, Fernanda Viégas, and Martin Wattenberg. The Playground page also credits two earlier projects as direct inspirations: Andrej Karpathy's ConvNetJS demos, which trained convolutional and ordinary neural networks entirely in the browser, and Christopher Olah's essays on neural networks, which used interactive diagrams to explain backpropagation, manifolds, and topology.
A short paper describing the project, titled Direct-Manipulation Visualization of Deep Networks, was presented at the ICML 2016 Workshop on Visualization for Deep Learning. The authors of that paper are Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, and Martin Wattenberg. A revised version was later posted to arXiv in August 2017 (arXiv:1708.03788). The paper argues that interactive visualization can give beginners an intuition about training dynamics that is hard to develop from textbook descriptions alone.
The project was open-sourced on GitHub at github.com/tensorflow/playground. The repository has received over 12,000 GitHub stars and continues to be maintained as a teaching resource. In Google's own educational properties the tool was eventually renamed simply "Playground," though the original name and URL are still in widespread use.
The Playground exposes six built-in two-dimensional toy datasets, four for classification and two for regression. They are generated programmatically each time the user clicks the regenerate button, with adjustable noise.
| Dataset | Type | Description |
|---|---|---|
| Two Gaussians | Classification | Two clusters of points drawn from Gaussian distributions, one labeled positive and one negative. Linearly separable when noise is low. |
| Circle | Classification | Positive points sampled inside a disk, negative points sampled in an outer ring. Not linearly separable in raw (x, y), but trivially separable using x squared plus y squared. |
| XOR | Classification | Points labeled by the sign of the product of their coordinates, recovering the classic XOR pattern. Cannot be solved by a single neuron. |
| Spiral | Classification | Two interlocking Archimedean spirals. The hardest built-in dataset; usually requires several hidden layers and careful tuning to fit cleanly. |
| Plane | Regression | Targets are a linear function of x and y, clipped within a bounded radius. |
| Multi-Gaussian | Regression | Targets are a mixture of Gaussian peaks arranged in a grid pattern. |
A train/test ratio slider controls how much of the generated data is held out for evaluation. The displayed loss curves use mean squared error for regression and a similar squared loss for classification.
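The generators are simple enough to sketch. The following is an illustrative TypeScript sketch, not the actual src/dataset.ts code (names such as generateCircleData and splitTrainTest are hypothetical), showing how a circle-style dataset with adjustable noise and a train/test split might be produced:

```typescript
// Hypothetical sketch of a circle-style dataset generator, loosely modeled
// on the generators described for src/dataset.ts. Names are illustrative.
interface Example2D {
  x: number;
  y: number;
  label: number; // +1 (inner disk) or -1 (outer ring)
}

function generateCircleData(numSamples: number, noise: number): Example2D[] {
  const radius = 5;
  const points: Example2D[] = [];
  for (let i = 0; i < numSamples; i++) {
    // Half the points inside a disk (positive), half in an outer ring (negative).
    const inside = i < numSamples / 2;
    const r = inside
      ? Math.random() * radius * 0.5
      : radius * (0.7 + Math.random() * 0.3);
    const angle = Math.random() * 2 * Math.PI;
    // Noise jitters the coordinates, making the classes overlap near the gap.
    const nx = (Math.random() - 0.5) * noise * radius;
    const ny = (Math.random() - 0.5) * noise * radius;
    points.push({
      x: r * Math.sin(angle) + nx,
      y: r * Math.cos(angle) + ny,
      label: inside ? 1 : -1,
    });
  }
  return points;
}

// A holdout split along the lines of the train/test ratio slider.
function splitTrainTest(data: Example2D[], trainRatio: number) {
  const shuffled = [...data].sort(() => Math.random() - 0.5); // crude shuffle
  const cut = Math.floor(shuffled.length * trainRatio);
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}
```

With noise set to 0, the two classes are cleanly separated by radius, which is why the x squared and y squared features make the circle dataset linearly separable.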
Users can toggle which input features the network actually receives. The seven feature options expose classical feature engineering tricks alongside the raw inputs.
| Feature | Symbol | Notes |
|---|---|---|
| x | X1 | Raw horizontal coordinate. |
| y | X2 | Raw vertical coordinate. |
| x squared | X1 squared | Captures circular boundaries when combined with y squared. |
| y squared | X2 squared | Captures circular boundaries when combined with x squared. |
| x times y | X1 X2 | Recovers the XOR pattern with a single linear classifier. |
| sin(x) | sin X1 | Periodic basis function in the horizontal direction. |
| sin(y) | sin X2 | Periodic basis function in the vertical direction. |
The state code in the repository (src/state.ts) also exposes cosine variants for x and y, though these are not surfaced as default toggles in the live UI. By turning features on and off, students can directly compare the classical machine learning approach (engineer good features, fit a shallow model) with the deep learning approach (use the raw inputs, add more hidden layers).
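The pedagogical point can be made concrete in a few lines. The sketch below is illustrative TypeScript, not Playground code (the function names are hypothetical); it lists the seven feature transforms and shows how the x times y feature makes the XOR labeling separable with a single threshold and no hidden layers:

```typescript
// Illustrative sketch of the seven Playground feature transforms and why
// they matter for linear separability. Not the actual Playground code.
type Feature = (x: number, y: number) => number;

const FEATURES: { [name: string]: Feature } = {
  x: (x, _y) => x,
  y: (_x, y) => y,
  xSquared: (x, _y) => x * x,
  ySquared: (_x, y) => y * y,
  xTimesY: (x, y) => x * y,
  sinX: (x, _y) => Math.sin(x),
  sinY: (_x, y) => Math.sin(y),
};

// The XOR dataset labels points by the sign of the product of coordinates.
function xorLabel(x: number, y: number): number {
  return x * y >= 0 ? 1 : -1;
}

// With the x*y feature, a single linear threshold (one weight, zero bias)
// recovers that label exactly: no hidden layer needed.
function linearOnProduct(x: number, y: number): number {
  return FEATURES.xTimesY(x, y) >= 0 ? 1 : -1;
}
```

Turning the same classifier loose on the raw x and y features fails on XOR, which is exactly the contrast between feature engineering and depth that the toggles are meant to teach.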
The network panel exposes plus and minus controls for the number of hidden layers (from zero to six) and the number of neurons per layer (from one to eight). Each neuron is rendered as a small square that shows what region of input space activates it most strongly, and the connecting lines between neurons are rendered with thickness and color proportional to the learned weight (blue for positive, orange for negative). Hovering over a neuron magnifies its activation pattern in the output panel, which makes it easy to see how a deeper network composes simple half-plane detectors into curved or piecewise decision boundaries.
Four activation functions are available, defined in the Activations class in src/nn.ts:
| Activation | Formula | Typical use in Playground |
|---|---|---|
| ReLU | max(0, z) | Default modern choice. Good for deeper networks. Can produce sharp, piecewise-linear decision boundaries. |
| Tanh | (e^z - e^-z) / (e^z + e^-z) | Smooth saturating activation. Often the best behaved on the spiral dataset for shallow networks. |
| Sigmoid | 1 / (1 + e^-z) | Saturates at 0 and 1. Useful for showing vanishing-gradient effects in deeper networks. |
| Linear | z | No nonlinearity; the network reduces to a linear model regardless of depth. |
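All four functions and their derivatives are short enough to write out. This is a hedged sketch in the spirit of the Activations class, with an assumed output/der interface rather than the exact src/nn.ts signatures:

```typescript
// Sketch of the four activations and their analytic derivatives, in the
// spirit of the Activations class in src/nn.ts (exact interface assumed).
interface ActivationFunction {
  output: (z: number) => number;
  der: (z: number) => number; // derivative, consumed by backpropagation
}

const RELU: ActivationFunction = {
  output: z => Math.max(0, z),
  der: z => (z <= 0 ? 0 : 1), // piecewise constant: 0 left of origin, 1 right
};

const TANH: ActivationFunction = {
  output: z => Math.tanh(z),
  der: z => 1 - Math.tanh(z) * Math.tanh(z), // d/dz tanh(z) = 1 - tanh(z)^2
};

const SIGMOID: ActivationFunction = {
  output: z => 1 / (1 + Math.exp(-z)),
  der: z => {
    const s = 1 / (1 + Math.exp(-z));
    return s * (1 - s); // d/dz sigmoid(z) = sigmoid(z)(1 - sigmoid(z))
  },
};

const LINEAR: ActivationFunction = {
  output: z => z,
  der: _z => 1,
};
```

The sigmoid derivative peaks at 0.25 and shrinks toward 0 in the saturated regions, which is the mechanism behind the vanishing-gradient demonstrations mentioned above.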
The regularizer dropdown offers None, L1, and L2. The implementation in src/nn.ts defines a RegularizationFunction class with penalty and derivative methods for each, so the chosen regularizer's derivative is added directly to the gradient during backpropagation. The strength of the penalty is controlled by a regularization rate slider whose values run from 0 (no penalty) up to 10 on a roughly logarithmic scale.
The learning rate slider also runs on a logarithmic scale, from 0.00001 up to 10. Training uses mini-batch gradient descent with a batch size selectable up to 30. There is no momentum, no Adam, and no learning-rate schedule; the deliberate simplicity is part of the teaching design.
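With no momentum or schedule, each update is simply w ← w − η(∂L/∂w + λ·R′(w)). A minimal illustrative sketch of that step (hypothetical names, not the src/nn.ts API):

```typescript
// Sketch of the per-weight update the Playground performs after each
// mini-batch: plain gradient descent plus the chosen regularizer's
// derivative. Names here are illustrative, not the actual src/nn.ts API.
type RegDer = (w: number) => number;

const L1_DER: RegDer = w => (w > 0 ? 1 : w < 0 ? -1 : 0); // d|w|/dw
const L2_DER: RegDer = w => w;                             // d(0.5 w^2)/dw

function updateWeight(
  w: number,
  gradient: number,      // averaged dLoss/dw over the mini-batch
  learningRate: number,  // the eta slider, 0.00001 to 10
  regRate: number,       // the lambda slider, 0 to 10
  regDer: RegDer | null  // null when the regularizer dropdown is None
): number {
  const penalty = regDer ? regRate * regDer(w) : 0;
  return w - learningRate * (gradient + penalty);
}
```

The L1 derivative pushes every weight toward zero by a constant amount (producing sparse weights), while the L2 derivative shrinks weights in proportion to their size, which is visible in the Playground as thinning connection lines.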
The problem type switch toggles between classification and regression. In classification mode the output panel renders the predicted class probability as a smooth gradient between blue and orange. In regression mode it renders the continuous predicted value with a similar color scheme. A discretize-output checkbox snaps the boundary to a hard line, which is useful when discussing the difference between probabilistic and thresholded predictions.
A show-test-data checkbox overlays held-out points on top of the decision surface, so students can see when the model is fitting the training set well but generalizing poorly to test data. The training and test loss curves are plotted in the top-right corner during training and update every epoch.
TensorFlow Playground is written in TypeScript, with HTML and CSS for layout and presentation. The visualization layer uses D3.js for the dataset scatter plot, the decision-boundary heatmap, the per-neuron activation thumbnails, and the loss curves. The build pipeline uses npm and the TypeScript compiler, producing a static bundle that ships from a CDN with no server-side computation.
The key implementation point is that the neural network library at the heart of the Playground is custom, written from scratch in TypeScript inside src/nn.ts. It is not a binding to TensorFlow, TensorFlow.js, or any other framework. The file defines:
- a Node class representing a single neuron, holding its activation, accumulated input, error signal, and bias
- a Link class representing a weighted connection between two nodes
- a buildNetwork helper that wires up an arbitrary stack of fully connected layers
- forwardProp and backProp functions that implement the standard feed-forward and backpropagation algorithms
- an updateWeights function that applies the gradient with the chosen learning rate, regularization rate, and regularizer

The Activations class provides the four activation functions and their analytic derivatives. The RegularizationFunction class provides the L1 and L2 penalty functions and derivatives. Errors are computed with a simple squared error.
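The shape of the computation is easy to see in miniature. The sketch below is illustrative, not Playground code: it implements the forward pass and the chain-rule gradient for a single tanh neuron with squared error, the pattern that src/nn.ts generalizes to whole layers of nodes and links:

```typescript
// Single-neuron sketch of forward/backward propagation with tanh and
// squared error. Illustrative only; src/nn.ts does this per-layer.
interface Neuron {
  w: number[]; // one weight per input
  b: number;   // bias
}

function forwardProp(n: Neuron, inputs: number[]): number {
  // Accumulate the weighted sum plus bias, then apply the activation.
  let z = n.b;
  for (let i = 0; i < inputs.length; i++) z += n.w[i] * inputs[i];
  return Math.tanh(z);
}

// Squared-error loss: 0.5 * (output - target)^2
function loss(n: Neuron, inputs: number[], target: number): number {
  const out = forwardProp(n, inputs);
  return 0.5 * (out - target) * (out - target);
}

// Chain rule: dL/dw_i = (out - target) * tanh'(z) * input_i
function backProp(n: Neuron, inputs: number[], target: number) {
  let z = n.b;
  for (let i = 0; i < inputs.length; i++) z += n.w[i] * inputs[i];
  const out = Math.tanh(z);
  const dOut = out - target;         // dLoss/dOutput
  const dZ = dOut * (1 - out * out); // dLoss/dTotalInput (tanh derivative)
  return {
    dw: inputs.map(x => dZ * x),
    db: dZ,
  };
}
```

A finite-difference check against the loss confirms the analytic gradient, which is the standard sanity test for any hand-written backpropagation.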
Dataset generation lives in src/dataset.ts and contains pure-function generators: classifyTwoGaussData, classifyCircleData, classifyXORData, classifySpiralData, regressPlane, and regressGaussian. Each function takes a sample count and a noise level and returns an array of two-dimensional examples with labels. Application state, including which features and hyperparameters are selected, is serialized into the URL hash by src/state.ts, which is what allows users to share specific configurations with a single link. This URL-based state is the mechanism that makes the Playground a useful teaching artifact: an instructor can set up an interesting configuration, copy the URL, and email it to a class.
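The sharing mechanism can be sketched in a few lines. The following illustrative TypeScript uses hypothetical field names and a hypothetical encoding, not the actual src/state.ts serialization format, to show the round-trip of a configuration through a URL hash string:

```typescript
// Illustrative sketch of URL-hash state serialization, not the actual
// src/state.ts code; field names and encoding here are hypothetical.
interface PlaygroundState {
  learningRate: number;
  activation: string;
  regularization: string;
  networkShape: number[]; // neurons per hidden layer
}

function serializeState(s: PlaygroundState): string {
  const parts = [
    `learningRate=${s.learningRate}`,
    `activation=${s.activation}`,
    `regularization=${s.regularization}`,
    `networkShape=${s.networkShape.join(",")}`,
  ];
  return "#" + parts.join("&"); // becomes window.location.hash
}

function deserializeState(hash: string): PlaygroundState {
  const map: { [k: string]: string } = {};
  for (const kv of hash.replace(/^#/, "").split("&")) {
    const [k, v] = kv.split("=");
    map[k] = v;
  }
  return {
    learningRate: parseFloat(map["learningRate"]),
    activation: map["activation"],
    regularization: map["regularization"],
    networkShape: map["networkShape"].split(",").map(Number),
  };
}
```

Because the entire configuration lives in the hash fragment, no server round-trip is needed: loading a shared link reconstructs the state entirely on the client.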
The Playground has become a standard teaching artifact in introductory machine learning curricula. Google integrated it into its own Machine Learning Crash Course, where several lessons rely on it for hands-on exercises in feature crosses, neural networks, and regularization. The exercises in that course explicitly walk students through Playground configurations, asking them to predict what will happen and then click play to verify or correct their intuition.
University courses use the Playground in similar ways. It appears in introductory deep learning lectures at multiple universities and in many massive open online courses on neural networks. Outside of formal courses, it has been featured in popular tutorial posts, including the 2016 Google Cloud blog post by Kaz Sato titled Understanding Neural Networks with TensorFlow Playground, which uses a sequence of Playground configurations to walk through single-neuron classification, hidden layers, and the double-spiral problem.
The AI4K12 initiative, which develops AI curriculum guidelines for kindergarten through twelfth grade in the United States, lists the Playground as a recommended classroom demonstration tool because it lets students see the relationship between data, model capacity, and decision boundary without any programming prerequisites.
The value of the Playground for teaching comes from making several abstract ideas concrete: overfitting and regularization, the trade-off between engineered features and added depth, the sensitivity of training to the learning rate, and the way a network composes simple detectors into a decision boundary.
A recurring pedagogical observation, made explicit in the Smilkov et al. paper, is that direct manipulation lowers the activation energy required to test a hypothesis. Reading about overfitting in a textbook does not commit the idea to memory; doubling the layers in the Playground, watching the test loss climb while the training loss falls, and then turning on L2 regularization to fix it usually does.
The Playground is part of a small lineage of in-browser interactive machine learning visualizations.
| Tool | Author(s) | Year | Notes |
|---|---|---|---|
| ConvNetJS | Andrej Karpathy | 2014 | JavaScript library that trains convolutional and fully connected networks in the browser; cited as a direct inspiration for the Playground. |
| Olah's neural network essays | Christopher Olah | 2014 onward | Essays such as Neural Networks, Manifolds, and Topology used interactive diagrams to explain how networks transform data. |
| TensorFlow Playground | Daniel Smilkov, Shan Carter | 2016 | Subject of this article. |
| Distill.pub | Shan Carter, Christopher Olah, and others | 2017 | Online journal of interactive machine learning articles, founded in part by the same Google researchers behind the Playground. |
| TensorFlow.js demos | | 2018 onward | Browser-based ML library that supports a much wider class of in-browser experiments and powers many later interactive demos. |
| Embedding Projector | Daniel Smilkov, Nikhil Thorat, and others | 2016 | Interactive 3D visualization of high-dimensional embeddings, also from the Google Big Picture team. |
The Playground's user interface conventions, especially using line thickness and color for weights and a heatmap for the decision boundary, have been reused and adapted in many later interactive ML articles, dashboards, and teaching tools. Several reimplementations exist, including a Python port (TFPlaygroundPSA) used for studying parameter-space behavior and forks that swap in different optimizers or datasets.
The Playground is intentionally narrow, and its limitations are part of its design rather than oversights, but they are worth naming when teaching with it: inputs are restricted to two dimensions, the datasets are small generated toys, the layers are fully connected with at most six hidden layers of eight neurons each, the only optimizer is plain mini-batch gradient descent, and the only loss is squared error. These constraints keep the tool fast and approachable, but they mean that the Playground is best used as a first encounter with neural networks rather than a serious experimental workbench.
Imagine a digital chalkboard where blue dots and orange dots are sprinkled around. Your job is to draw a squiggly line that puts all the blue dots on one side and all the orange dots on the other. TensorFlow Playground is a website where the computer draws that line for you, and you get to decide how it learns. You pick how many helpers (called neurons) it has, you pick how many groups of helpers there are, and you pick how big its steps are when it tries new ideas. Then you press play and watch the line wiggle around until it gets the dots right. If you give it too many helpers, it tries too hard and remembers each dot instead of learning the pattern, which is what people mean by overfitting. If you give it too few, it cannot fit the shape at all. Playing with these dials for a few minutes teaches you more about neural networks than reading three chapters of a textbook.