This overview describes the state of the platform on October 1, 2019.
Branches in the benchmark project:
- Public Docker image, without project template
- Custom Docker image with remote debug
- Initial version of project template
- A branch with the up-to-date version of the project template is yet to be created.
Developer experience - 3
The platform team aims to provide a best-in-class developer experience. Neuro is flexible by nature: it allows running various open-source and commercial tools (for example, Jupyter Notebooks, TensorBoard, NNI for hyper-parameter tuning, etc.) and lets you follow best practices, such as debugging your models remotely.
The platform provides a powerful, well-documented CLI accompanied by a browser dashboard, which supports the most important operation every ML engineer must be able to perform: terminating an environment to stop spending money. Additionally, an official Python API is available.
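As an illustrative sketch, a typical housekeeping session with the CLI might look like the following. The exact command names and the `<job-id>` placeholder are assumptions here; consult `neuro --help` for the actual syntax of your CLI version.

```shell
# List your jobs and their current statuses
neuro ps

# Stop a running job (and the billing for it) by its id
neuro kill <job-id>
```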
The platform also provides various entry-level materials: tutorials, FAQs, and a powerful project template that hides most of the CLI boilerplate under the hood.
ML environments extensibility - 3
The platform uses Docker images to provide ML environments. You can use public images (like jupyter/base-notebook) or your own: the platform provides the ability to upload and manage images. Moreover, you can install additional software in a running Docker container and save it as a new image snapshot, which allows further customization and reuse of the environments.
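The image workflow described above could be sketched roughly as follows. The command names, the `myenv`/`myjob` names, and the `image:` URI scheme are assumptions for illustration; the actual CLI syntax may differ.

```shell
# Push a locally built Docker image to the platform registry
neuro push myenv:v1

# Run a job on top of it, install extra packages inside the
# running container, then snapshot it as a new image
neuro run --name myjob image:myenv:v1
neuro save myjob image:myenv:v2
```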
This means that bringing your own frameworks and integrating open-source and commercial ML tools requires roughly the same effort as doing so on your local or a remote machine.
Luckily, you don’t have to understand Docker to train your model on the Neuro platform: it provides a base environment, which is used implicitly if you start your project from the platform's project template.
Data Ingestion - 3
The platform provides a “storage” abstraction which is similar to a remote server's file system: you can upload and download datasets, code, and training artifacts alike, browse the storage, and rename, move, and delete files. From the training code's perspective, data is accessed the same way as on a local machine.
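A typical round trip through the storage abstraction might look like this sketch. The command names, the `storage:` URI scheme, and the `my-project` paths are assumptions for illustration, not the verified CLI syntax.

```shell
# Upload a local dataset to the platform storage
neuro cp -r ./data storage:my-project/data

# Browse the storage and remove a file
neuro ls storage:my-project
neuro rm storage:my-project/data/old-dump.csv

# Download training artifacts back to the local machine
neuro cp -r storage:my-project/results ./results
```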
AI starter kits - 3
The platform provides several AI starter kits (a.k.a. “ML Recipes”). They are organized as GitHub projects and based on the project template. You can clone and start any of them with a few short CLI commands. They cover several vision and NLP problems. Additionally, two Fast.AI courses are available.
Collaboration - 3
The platform allows easy and secure sharing of data, environments, and work sessions (jobs) with any member of your team. Non-engineers can also benefit from the ability to access training results and demo sessions.
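As a sketch of the sharing flow, granting a teammate access might look as follows. The command shape, the `alice` user name, and the URI schemes are assumptions for illustration only.

```shell
# Grant a teammate read access to a dataset on storage
neuro share storage:my-project/data alice read

# Share a running job (e.g., a Jupyter session) the same way
neuro share job:<job-id> alice read
```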
Bring your own cloud - 2
The platform is designed to be deployed on top of your own cloud, be it AWS or GCP. The platform team can set up your Neuro cluster in less than an hour. Azure and Oracle Cloud are not supported yet.
Enterprise-ready - 1
The platform provides unified access to your resources. Once you log in to Neuro, you can access your data, Docker images, and job instances without additional access keys or login information.
Other enterprise-readiness indicating features (like audit logs, role-based security, and reporting) are not yet supported on the platform.