Microsoft has announced the general availability of Azure Machine Learning, a service that accelerates the rate at which data scientists build and deploy machine-learning models.
The cloud-based service automatically identifies machine-learning models suited to analysing a given dataset and suggests how each model's hyperparameters should be configured.
The service is not designed to replace data scientists, but rather to make them more efficient, cutting their workload by narrowing the choices on offer and then letting them decide which of the suggested models and configurations work best for their data.
"It really boosts the productivity of data scientists," Bharat Sandhu, Microsoft's director of machine learning, advanced analytics and AI told TechRepublic.
"It is a complementary thing. It gives them a benchmark, it allows them to test multiple models and gives them the accuracy of those models. They'll see all the models that were run, they'll see the features and the accuracy.
"That's where data scientists come in, saying 'OK, great, thank-you, I have a good benchmark to work from and now I can make it much more accurate'."
SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)
Azure Machine Learning supports popular open-source frameworks such as PyTorch, TensorFlow and scikit-learn. It also offers DevOps capabilities that streamline management of the underlying infrastructure, enable experiment tracking, and manage models deployed in the cloud and at the edge of the network.
The offering also integrates with other services on the Azure cloud platform, and comes with a complementary DataPrep SDK for preparing data for machine-learning training, as well as a simplified training process that spares users from managing the underlying CPU and GPU clusters.
One Microsoft customer using Azure Machine Learning is TAL, a 150-year-old Australian life insurance company, which is using the service to improve the "customer experience".
"Azure Machine Learning regularly lets TAL's data scientists deploy models within hours rather than weeks or months - delivering faster outcomes and the opportunity to roll out many more models than was previously possible," said Gregor Pacnik, innovation delivery manager with TAL.
Data scientists can access Azure Machine Learning's capabilities in various ways, including locally from a data scientist's workstation or laptop. The service's Python SDK is accessible from any Python environment, IDEs like Visual Studio Code (VS Code) or PyCharm, or Notebooks such as Jupyter and Azure Databricks.
For those interested in how Azure Machine Learning works: the data scientist supplies the dataset and specifies whether they want to carry out regression, classification or forecasting. Automated machine learning then proposes a model by performing feature engineering, selecting an algorithm and sweeping hyperparameters; the service can also tune the hyperparameters of existing models. A machine-learning pipeline lets data scientists split the process into discrete steps, such as data movement, data transforms, feature extraction, training and evaluation, while a run-history feature captures each training run, the model's performance and related metrics.
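The core idea behind the sweep-and-benchmark workflow described above can be sketched without the Azure SDK at all. The following is a minimal, framework-free illustration in plain Python: try several candidate model configurations, score each on the data, keep a run history, and report the best as a benchmark. All names here (`automated_sweep`, `run_history`, the threshold "models") are illustrative stand-ins, not Azure Machine Learning API names.

```python
def evaluate(predict, data):
    """Fraction of examples the candidate classifies correctly."""
    correct = sum(1 for x, label in data if predict(x) == label)
    return correct / len(data)

def automated_sweep(candidates, data):
    """Score every candidate configuration; return the run history and the best run."""
    run_history = []
    for name, predict in candidates.items():
        accuracy = evaluate(predict, data)
        run_history.append({"model": name, "accuracy": accuracy})
    best = max(run_history, key=lambda run: run["accuracy"])
    return run_history, best

# Toy dataset: classify numbers as "big" (>= 10) or "small".
data = [(3, "small"), (7, "small"), (12, "big"), (20, "big")]

# Two hypothetical candidate configurations, standing in for the
# hyperparameter settings an automated sweep would explore.
candidates = {
    "threshold-5":  lambda x: "big" if x >= 5 else "small",
    "threshold-10": lambda x: "big" if x >= 10 else "small",
}

history, best = automated_sweep(candidates, data)
print(best["model"], best["accuracy"])  # → threshold-10 1.0
```

In the real service the candidates would be full models with engineered features rather than thresholds, but the output is the same in spirit: a history of every run with its accuracy, from which the data scientist picks a starting point to refine.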
Azure supports all Python-based machine-learning frameworks. To simplify interoperability between them, Microsoft worked with major tech firms, including Facebook and Amazon Web Services (AWS), as well as hardware companies, to develop the Open Neural Network Exchange (ONNX), an open standard format for describing machine-learning models.
The Azure Machine Learning service supports ONNX, enabling users to deploy, manage, and monitor ONNX models. Microsoft and its partners also announced they are open sourcing the ONNX Runtime from today.
The general availability of Azure Machine Learning was announced at the Microsoft Connect() event today.
Among the other machine-learning announcements was the extension of Azure's Cognitive Services Containers preview, which allows Azure's on-demand machine-learning services to run locally on devices at the edge of the network.
This preview has now been extended so that Azure Language Understanding can run in containers, which Sandhu says allows devices at the edge of a network to respond to spoken commands without sending those commands back to the Azure cloud, and makes a wider range of Azure Cognitive Services suited to running on edge devices.
Custom translations for Azure Cognitive Services are also now generally available, enabling them to be integrated into existing applications, workflows and websites.
The event was also used to reveal a variety of news about the wider Azure cloud platform, including the general availability of Azure Stream Analytics on IoT Edge, making it easier to move analytics between the edge and the cloud, Azure Boards integration with GitHub Issues, and the Azure Pipelines extension for the VS Code IDE.
Microsoft and Docker also announced the Cloud Native Application Bundle (CNAB) - an open-source, cloud-agnostic specification for packaging and running distributed applications.
CNAB is designed to work with everything from Azure to the Docker platform to on-prem OpenStack and Kubernetes. Microsoft and Docker are both providing open-source tools to get customers started on CNAB.
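At the heart of the CNAB spec is a bundle manifest that names the application and points at an invocation image containing its installation logic. As a rough sketch of the shape such a manifest takes under the early draft of the spec (the application name, version and image here are hypothetical placeholders, not from any real bundle):

```json
{
  "schemaVersion": "v1.0.0-WD",
  "name": "example-app",
  "version": "0.1.0",
  "description": "A hypothetical distributed application packaged with CNAB",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/app-installer:0.1.0"
    }
  ]
}
```

The cloud-agnostic part comes from the invocation image: because the installer logic ships inside a container image, the same bundle can target Azure, the Docker platform, OpenStack or Kubernetes.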
Nick Heath is chief reporter for TechRepublic. He writes about the technology that IT decision makers need to know about, and the latest happenings in the European tech scene.