Apple has released eight new small LLMs as part of CoreNet, the company's library for training deep neural networks.
The models, called OpenELM (Open-source Efficient Language Models), come in eight variants: four are pretrained and four are instruction-tuned, each available at 270M, 450M, 1.1B, and 3B parameters.
Because of their small size, the models should be able to run directly on devices rather than having to connect back to a server for computation.
According to Apple, the goal of OpenELM is to “empower and enrich the open research community by providing access to state-of-the-art language models.”
The models are currently available only on Hugging Face, and Apple has also made the source code available.
“The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model … This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors,” the Apple researchers wrote in a paper.
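For readers who want to experiment, below is a minimal sketch of loading one of the checkpoints with the Hugging Face transformers library. The model id `apple/OpenELM-270M`, the need for `trust_remote_code`, and the use of a Llama tokenizer reflect how the checkpoints are distributed on Hugging Face, and are not details from the article itself.

```python
# Minimal sketch: loading an OpenELM checkpoint from Hugging Face.
# Assumptions (not stated in the article): the model id below, the
# trust_remote_code requirement for Apple's custom modeling code, and
# the Llama 2 tokenizer (a gated repo requiring approved access).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"  # smallest pretrained variant

# OpenELM ships custom modeling code, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# The release pairs the models with a Llama tokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```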