MIT startup Liquid AI releases its first series of generative AI models

Liquid AI, an AI startup spun out from MIT, has announced its first series of generative AI models, which it refers to as Liquid Foundation Models (LFMs).

“Our mission is to create best-in-class, intelligent, and efficient systems at every scale – systems designed to process large amounts of sequential multimodal data, to enable advanced reasoning, and to achieve reliable decision-making,” Liquid explained in a post.

According to Liquid, LFMs are “large neural networks built with computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra.” By comparison, LLMs are built on the transformer architecture; by moving away from it, LFMs can maintain a much smaller memory footprint than transformer-based LLMs.

“This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length. By efficiently compressing inputs, LFMs can process longer sequences on the same hardware,” Liquid wrote. 
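
To put that claim in context, here is a rough, back-of-the-envelope sketch (not taken from Liquid’s post) of how a transformer’s KV cache grows with sequence length; the layer count, head count, and head dimension below are illustrative assumptions.

    # Back-of-the-envelope KV-cache size for a transformer-style model.
    # All hyperparameters below are illustrative assumptions, not Liquid's.
    def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
        # Keys and values are stored for every layer and every token.
        return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

    for seq_len in (4_096, 32_768, 131_072):
        gib = kv_cache_bytes(seq_len) / 2**30
        print(f"{seq_len:>7} tokens -> ~{gib:.1f} GiB of KV cache")

    # A model that compresses its input into a fixed-size state instead pays a
    # roughly constant memory cost per sequence, which is the advantage Liquid
    # highlights for long inputs.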

Liquid’s models are general-purpose and can be used to model any type of sequential data, like video, audio, text, time series, and signals. 

According to the company, LFMs are good at general and expert knowledge, mathematics and logical reasoning, and efficient and effective long-context tasks.

The areas where they fall short today include zero-shot code tasks, precise numerical calculations, time-sensitive information, human preference optimization techniques, and “counting the r’s in the word ‘strawberry,’” the company said.

Currently, their main language is English, but they also have secondary multilingual capabilities in Spanish, French, German, Chinese, Arabic, Japanese, and Korean. 

The first series of LFMs includes three models (a rough memory sketch follows the list):

  • 1.3B model designed for resource-constrained environments
  • 3.1B model ideal for edge deployments
  • 40.3B Mixture of Experts (MoE) model optimal for more complex tasks
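
For rough context on these parameter counts (a back-of-the-envelope estimate, not figures from Liquid), a model’s weight memory is approximately its parameter count times the bytes used per weight; the precision choices below are illustrative.

    # Rough weight-memory estimate: parameters x bytes per weight.
    # The fp16 and int8 precisions are illustrative assumptions.
    def weight_memory_gib(params_billion, bytes_per_weight):
        return params_billion * 1e9 * bytes_per_weight / 2**30

    for name, params_billion in (("1.3B", 1.3), ("3.1B", 3.1), ("40.3B MoE", 40.3)):
        fp16 = weight_memory_gib(params_billion, 2)
        int8 = weight_memory_gib(params_billion, 1)
        print(f"{name:>9}: ~{fp16:.1f} GiB at fp16, ~{int8:.1f} GiB at int8")

On this rough math, the 1.3B model fits in a couple of gigabytes even at fp16, consistent with its positioning for resource-constrained environments, while a Mixture of Experts model activates only a subset of its experts per token, so its per-token compute is lower than its total parameter count suggests.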

Liquid says it will take an open-science approach to its research, openly publishing its findings and methods to help advance the AI field, but it will not be open-sourcing the models themselves.

“This allows us to continue building on our progress and maintain our edge in the competitive AI landscape,” Liquid wrote. 

According to Liquid, it is working to optimize its models for NVIDIA, AMD, Qualcomm, Cerebras, and Apple hardware.

Interested users can try out the LFMs now on Liquid Playground, Lambda (Chat UI and API), and Perplexity Labs. The company is also working to make them available on Cerebras Inference.

