Adobe develops SlimLM, an AI model that processes documents locally on devices without Internet connectivity


Adobe researchers have published a paper detailing a new artificial intelligence (AI) model capable of processing documents locally on a device. Published last week, the paper explains that the researchers experimented with existing large language models (LLMs) and small language models (SLMs) to find out how a model's size could be reduced while keeping its processing capacity and inference speed high. As a result of these experiments, the researchers developed an AI model called SlimLM that runs entirely on a smartphone and can process documents there.

Adobe researchers have developed SlimLM

AI-powered document processing, which allows a chatbot to answer a user's questions about a document's content, is an important use case of generative AI. Many companies, including Adobe, have embraced this use case and released tools that provide the functionality. However, all such tools share one problem: the AI processing happens in the cloud. Server-side processing raises data privacy concerns and makes handling documents containing sensitive information a risky proposition.

The risk primarily stems from the fear that the company offering the solution may train its AI models on the data, or that sensitive information may be leaked in a data breach. As a solution, Adobe researchers published a paper on the online preprint server arXiv detailing a new AI model that can perform document processing entirely on device.

The smallest version of the AI model, called SlimLM, has just 125 million parameters, making it possible to integrate it within a smartphone's operating system. The researchers claim that it works locally, without the need for an Internet connection. As a result, users can process even their most sensitive documents without worry, as the data never leaves the device.
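To put that 125-million-parameter figure in perspective, a back-of-the-envelope calculation shows why a model this size fits comfortably in a phone's memory. The parameter count comes from the paper; the bytes-per-parameter values are the standard sizes for common numeric precisions, and the precision choices shown are illustrative assumptions, not details from the paper.

```python
# Rough weight-memory estimate for a small on-device language model.
# 125M parameters is the smallest SlimLM variant per the article;
# the precision/bytes pairs below are standard values, listed here
# as an illustration (the paper does not specify SlimLM's precision).

def model_memory_mb(num_params: int, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights, in megabytes (1 MB = 1e6 bytes)."""
    return num_params * bytes_per_param / 1e6

PARAMS = 125_000_000  # smallest SlimLM variant

for label, bpp in [("fp32", 4.0), ("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_mb(PARAMS, bpp):.0f} MB of weights")
```

Even at full 32-bit precision the weights occupy roughly half a gigabyte, and common half-precision or quantized formats bring that well under the RAM available on a flagship smartphone, which is what makes fully local inference plausible.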

In the paper, the researchers highlight that they ran several experiments on a Samsung Galaxy S24 to find a balance between parameter count, inference speed, and processing capability. After settling on a configuration, the team pre-trained the model on the SlimPajama-627B dataset and fine-tuned it using DocAssist, specialized software for document processing.

Notably, arXiv is a preprint server, so papers are published there without peer review, and the claims made in the research paper cannot yet be independently verified. If they hold up, however, models like SlimLM could ship with Adobe's products in the future.


