The TensorFlow team, which develops the open source machine learning platform at Google, recently released a blog post laying out its plans for the future of the project.
According to TensorFlow, the ultimate goal is to provide users with the best machine learning platform possible as well as transform machine learning from a niche craft into a mature industry.
To accomplish this, the team said it will listen to user needs, anticipate new industry trends, iterate on its APIs, and work to make it easier for users to innovate at scale.
To facilitate this growth, TensorFlow intends to focus on four pillars: speed and scalability, applied ML, readiness to deploy, and simplicity.
TensorFlow stated that it will be focusing on XLA compilation, with the goal of making model training and inference workflows faster on GPUs and CPUs. Additionally, the team said that it will be investing in DTensor, a new API for large-scale model parallelism.
The new API allows users to develop models as if they were training on a single device, even when the work is spread across multiple clients.
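DTensor is already available as an experimental module in recent TensorFlow releases; the following is a minimal sketch of its mesh-and-layout model, with the mesh size, device list, and tensor shapes chosen purely for illustration rather than taken from the announcement.

import tensorflow as tf
from tensorflow.experimental import dtensor

# Split the single physical CPU into eight logical devices so the sketch
# runs on an ordinary machine; a real job would use GPUs or TPUs spread
# across multiple clients.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu, [tf.config.LogicalDeviceConfiguration()] * 8)

# A one-dimensional mesh named "batch" spanning the eight devices.
mesh = dtensor.create_mesh(
    [("batch", 8)], devices=[f"CPU:{i}" for i in range(8)])

# Shard the first axis of a tensor across "batch"; replicate the second.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)

# Variables created this way are distributed across the mesh, while the
# surrounding code reads like ordinary single-device TensorFlow.
w = dtensor.DVariable(
    dtensor.call_with_layout(tf.zeros, layout, shape=(8, 16)))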
TensorFlow also intends to invest in algorithmic performance optimization techniques such as mixed-precision and reduced-precision computation in order to get more performance out of GPUs and TPUs.
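Both of these levers are already exposed in TF 2 today; as a rough sketch (the toy model, shapes, and data below are invented for illustration), XLA can be requested per model with jit_compile and mixed precision enabled with a global Keras policy.

import tensorflow as tf

# Mixed precision: compute in float16 where safe, keep variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    # Keep the final softmax in float32 for numerical stability.
    tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
])

# jit_compile=True asks TensorFlow to compile the train step with XLA.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              jit_compile=True)

x = tf.random.normal((256, 32))
y = tf.random.uniform((256,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=64)

Under the mixed_float16 policy, variables stay in float32 while most computation runs in float16, which is what lets GPUs and TPUs use their faster low-precision hardware paths.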
According to the team, new tools for CV and NLP are also part of its roadmap. These tools will come as a result of increased support for the KerasCV and KerasNLP packages, which offer modular and composable components for applied CV and NLP use cases.
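As a hedged example of what those packages look like today, KerasNLP can load a pretrained classifier from a preset and fine-tune it directly on raw strings; the preset name, labels, and tiny dataset below are placeholders, not part of the announcement.

# Requires the separate keras-nlp package (pip install keras-nlp).
import keras_nlp

# Load a pretrained BERT classifier from a preset and fine-tune it;
# the preset and label count here are illustrative.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_tiny_en_uncased", num_classes=2)
classifier.fit(x=["great movie", "terrible movie"], y=[1, 0], batch_size=2)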
Next, TensorFlow stated that it will be adding more developer resources such as code examples, guides, and documentation for popular and emerging applied ML use cases in order to lower the barrier to entry for machine learning.
The team also intends to simplify the process of exporting to mobile (Android or iOS), edge (microcontrollers), server backends, or JavaScript, as well as to develop a public TF2 C++ API for native server-side inference as part of a C++ application.
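For reference, the existing Keras-to-TensorFlow Lite export path looks roughly like this; the model and file name are placeholders.

import tensorflow as tf

# A trivial model standing in for a real one.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert the in-memory Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)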
TensorFlow also stated that it will become easier to deploy models developed in JAX, both with TensorFlow Serving and to mobile and the web with TensorFlow Lite and TensorFlow.js.
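One existing route for this goes through the jax2tf converter, which turns a JAX function into a TensorFlow SavedModel that Serving and the TFLite converter understand; the sketch below assumes that path, with the toy function, input signature, and output directory invented for illustration.

import jax.numpy as jnp
from jax.experimental import jax2tf
import tensorflow as tf

# A toy JAX function standing in for a trained JAX model's apply function.
def jax_predict(x):
    return jnp.tanh(x) * 2.0

# Wrap the converted function in a tf.Module so it can be saved as a
# SavedModel for TensorFlow Serving or further conversion to TFLite.
module = tf.Module()
module.predict = tf.function(
    jax2tf.convert(jax_predict),
    input_signature=[tf.TensorSpec([None, 8], tf.float32)],
    autograph=False)

tf.saved_model.save(module, "/tmp/jax_saved_model")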
Lastly, the team is working to consolidate and simplify APIs, as well as to minimize the time-to-solution for developing any applied ML system by focusing more on debugging capabilities.
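Some of the relevant debugging hooks already exist in TF 2; for instance, tf.function-compiled code can be forced to run eagerly and numeric checks switched on globally, as in this small illustrative sketch.

import tensorflow as tf

# Run tf.function-decorated code eagerly so Python debuggers and print
# statements work line by line instead of on a compiled graph.
tf.config.run_functions_eagerly(True)

# Raise a descriptive error as soon as any op produces NaN or Inf.
tf.debugging.enable_check_numerics()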
A preview of these new TensorFlow capabilities can be expected in Q2 2023, with the production version coming later in the year. To follow the progress, see the TensorFlow blog and YouTube channel.