Annotation tool

Here at Deep Systems we face the problem of creating good training datasets on a daily basis. As a result, we have developed our own web tool for image annotation.

Computer vision


About project


To develop a tool for fast and efficient image annotation.


  • The web client was built with AngularJS
  • Backend software was developed to store and process datasets
  • Tools for import and export were created
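To make the import/export idea concrete, here is a rough sketch of what an exported annotation file could look like. The layout below is hypothetical (a COCO-style JSON structure is assumed for illustration); the tool's actual format may differ.

```python
import json

# Hypothetical COCO-style export layout; the tool's real format may differ.
export = {
    "images": [{"id": 1, "file_name": "street_001.jpg", "width": 1280, "height": 720}],
    "categories": [{"id": 1, "name": "pedestrian"}],
    "annotations": [
        # bbox is [x, y, width, height] in pixels
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [412, 230, 64, 170]},
    ],
}

# Serializing to JSON is all an export step needs; import is the reverse.
serialized = json.dumps(export, indent=2)
restored = json.loads(serialized)
```

A plain-text format like this also makes it easy to convert annotations between tools.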


Data is key

Today, the most successful Deep Learning projects in business are those built with Supervised Learning approaches. The best solutions for recognition, image segmentation, speech understanding, machine translation from one language to another, etc. all have one thing in common: an immense training dataset.

For example, the training set of a pedestrian detection system consists of a large number of pairs like this:

  • An image with pedestrians
  • The coordinates of bounding rectangles around all the pedestrians in the picture
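In code, one such training pair could be represented roughly as follows. The structure and field names here are illustrative assumptions, not a fixed format:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative structure for one detection training pair;
# field names are assumptions, not a fixed format.
@dataclass
class DetectionSample:
    image_path: str                          # the image with pedestrians
    boxes: List[Tuple[int, int, int, int]]   # one (x, y, width, height) box per pedestrian

sample = DetectionSample(
    image_path="frames/street_017.jpg",
    boxes=[(412, 230, 64, 170), (790, 241, 58, 162)],
)
```

A detector is then trained to map the image to the list of boxes.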

The same goes for machine translation:

  • A sentence in the Russian (source) language
  • A sentence in the English (target) language

Along with today's takeoff of AI there is another strong trend: the majority of the IT industry giants (Google, Facebook, IBM, etc.) publish articles and even share the source code of their Deep Learning solutions, and often these solutions are very important to those companies. What these companies do not publish is the training datasets used to build those great products.

Pic 1. Image annotation idea illustration

So, today, training datasets are the main asset of companies working in the area of AI. Industry giants are in a privileged position, since their Internet services generate terabytes of data, part of which can be used to build Deep Learning models.

Likewise, tools that automate and speed up the collection of training datasets are not being published as open source.

Let's focus on two specific problems: semantic scene segmentation and object detection in images. Large companies, of course, have internal tools for creating training datasets for these problems. For the rest of the market, there are the following alternatives:

  • Use open datasets (note: sometimes the license does not allow using these datasets for commercial purposes)
  • Use open tools like LabelMe
  • Develop your own tool

Pic 2. Example of our annotation tool usage
Pic 3. Example of our annotation tool usage

Open training datasets are very limited. There are no easy, open tools for conveniently annotating images. Developing and maintaining your own annotation tool can be a hard and costly process, especially for small companies.


When can such a tool be useful for you?

Here are a few scenarios where our tool can be useful for your business:

  • Creating a dataset from scratch
  • Updating an existing dataset with new objects
  • Combining multiple training datasets into one
  • Help with training dataset creation
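The "combining datasets" scenario can be sketched in a few lines. This is a minimal illustration, assuming each dataset is simply a list of per-image annotation records keyed by an image identifier; the real tool's merge logic is not shown here:

```python
# Minimal sketch of combining two annotation datasets into one,
# assuming each dataset is a list of {"image": ..., "objects": [...]} records.
def merge_datasets(a, b):
    seen = set()
    merged = []
    for record in a + b:
        # De-duplicate by image identifier so overlapping datasets merge cleanly
        if record["image"] not in seen:
            seen.add(record["image"])
            merged.append(record)
    return merged

ds_a = [{"image": "img_001.jpg", "objects": ["pedestrian"]}]
ds_b = [{"image": "img_001.jpg", "objects": ["pedestrian"]},
        {"image": "img_002.jpg", "objects": ["car"]}]
merged = merge_datasets(ds_a, ds_b)  # 2 records: the overlap is kept once
```

A real merge would also have to reconcile class lists and resolve conflicting annotations for the same image.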

Perhaps LabelMe is the only alternative, but its functionality is severely limited:

Feature                                  | Our tool                                 | LabelMe
-----------------------------------------|------------------------------------------|---------------------------
Export and import                        | ✓                                        |
Mturk support                            | ✓                                        |
Pixel-level markup                       | ✓                                        |
Handling big images                      | ✓                                        |
Filtration of classes and objects        | ✓                                        |
Thousands of objects at the same time    | ✓                                        |
Object types                             | Rectangle, Polygon, Dot, Line, Skeleton  | Rectangle, Polygon
Updates                                  | Every week                               | No updates since Jul 2016
Easy to deploy                           | Node.js                                  | Apache, SSI, perl/CGI, PHP

Table 1. LabelMe and our tool comparison


Although LabelMe has a long history, it was developed back in 2008. When building our solution, we tried to take that accumulated experience into account from the very start.

To draw objects in the browser we use Canvas and the Fabric.js library. This allows us to work with hundreds and even thousands of objects at the same time, and it provides complex interaction capabilities right in the browser, such as panning, rotation, and zooming. It gives us accurate per-pixel markup, a must-have for a number of Machine Learning tasks.
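Per-pixel markup can be derived from polygon annotations. The sketch below rasterizes a polygon into a binary mask with a standard even-odd ray-casting test; it is pure Python for illustration only, not the tool's actual Canvas/Fabric.js implementation:

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize(poly, width, height):
    """Return a per-pixel binary mask (list of rows) for the polygon."""
    return [[1 if point_in_polygon(px + 0.5, py + 0.5, poly) else 0
             for px in range(width)]
            for py in range(height)]

# A 4x4 square polygon inside a 6x6 image: 16 pixel centers fall inside
mask = rasterize([(1, 1), (5, 1), (5, 5), (1, 5)], 6, 6)
```

Segmentation models are then trained on such masks rather than on the polygon vertices themselves.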

On the backend we use Node.js and MongoDB, both well proven in other projects.

Docker allows everyone to deploy and run the application on any operating system in minutes.
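For a Node.js service like this one, the container setup could look roughly like the following. This is a minimal sketch under assumptions: the file names, port, and start command are illustrative, not the project's actual configuration.

```dockerfile
# Hypothetical Dockerfile sketch for a Node.js service;
# paths, port, and commands are assumptions, not the project's real config.
FROM node:lts
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

With an image like this, `docker run` brings the application up the same way on any host operating system.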
