META’s new AI Evaluator will transform large language model training, here’s how

Just two months ago, OpenAI's CEO Sam Altman said that training AI on synthetic data (i.e., data generated by AI) could lead to unreliable AI models.

Meta's most recent AI evaluator, which was announced at the same time, is already out to prove Altman wrong.

Meta’s Self-Taught AI Evaluator is designed to help the development of large language models by allowing them to self-evaluate and self-improve without human intervention.

Currently, improving LLMs requires an inefficient process in which skilled humans check answers for accuracy, which considerably raises adoption times and costs.

Moreover, the process requires human-generated data, which is only available in finite quantities.

Meta's new model can generate data to train other AI models, which means one problem is already solved.

It also means that humans won’t be needed to oversee the quality of the data being fed to AI models. The AI evaluator will take care of that too.

Key features of the Self-Taught Evaluator

The new model has two main features that could transform the AI industry.

First, it uses an autonomous learning feature to generate tasks and assess its own performance on them, eliminating the need for skilled humans to verify the data used and the answers given.

This reduces the time and cost of model development and improvement, and it brings scalability, a key requirement for corporations that need to deploy models across multiple platforms and for different users' needs.

Reduced human participation also limits the potential bias that humans introduce into the model.
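To make the idea concrete, here is a minimal sketch of the kind of loop such a system runs: the model generates candidate answers to a prompt, a model-as-judge picks the better one, and the resulting preference pair becomes training data with no human annotator involved. The function names and the toy stand-ins (`toy_generate`, `toy_judge`) are illustrative assumptions, not Meta's actual implementation; a real system would call an LLM at both steps.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PreferencePair:
    """One judge-labeled training example: a prompt plus a better/worse answer pair."""
    prompt: str
    chosen: str
    rejected: str


def build_preference_data(
    prompts: list[str],
    generate: Callable[[str], list[str]],   # produces candidate answers (an LLM in practice)
    judge: Callable[[str, str, str], int],  # model-as-judge: returns 0 or 1 for the winner
) -> list[PreferencePair]:
    """Collect preference pairs with no human in the loop."""
    pairs = []
    for prompt in prompts:
        a, b = generate(prompt)[:2]
        winner = judge(prompt, a, b)
        chosen, rejected = (a, b) if winner == 0 else (b, a)
        pairs.append(PreferencePair(prompt, chosen, rejected))
    return pairs


# Deterministic toy stand-ins so the sketch runs end to end.
def toy_generate(prompt: str) -> list[str]:
    return [prompt.upper(), prompt.lower()]


def toy_judge(prompt: str, a: str, b: str) -> int:
    # A trivial heuristic; a real judge would be an LLM scoring quality.
    return 0 if len(a) >= len(b) else 1


data = build_preference_data(["What is 2+2?"], toy_generate, toy_judge)
print(data[0].chosen)  # → WHAT IS 2+2?
```

The pairs produced this way are the raw material for preference-based fine-tuning, which is how self-generated judgments feed back into model improvement.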

Second, it uses a "Chain of Thought" reasoning technique that emulates human reasoning by working through a series of intermediate steps toward solving a complex task, rather than producing a single direct answer.

To do this, the model can draw on reasoning that previously succeeded on similar problems.
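In practice, Chain-of-Thought behavior is typically elicited by how the question is framed. The sketch below shows one common way to wrap a question so the model lays out intermediate steps before answering; the exact wording is an illustrative assumption, not a quoted prompt from Meta's system.

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in an instruction that elicits step-by-step reasoning."""
    return (
        "Think through the problem step by step. Show each intermediate "
        "step on its own line, then state the final answer.\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )


print(chain_of_thought_prompt(
    "A train travels 120 km in 2 hours. What is its average speed?"
))
```

Because the intermediate steps are written out, an evaluator model can check each step rather than only the final answer, which is what makes this technique useful for self-evaluation.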

Why is this a big deal?

Being able to generate synthetic data to train and evaluate other AI models is a big deal for companies that want to use AI in customer support, employee training or legal analysis.

For example, in a customer support chatbot, the model can break an issue down into a series of smaller steps, checking possible causes and guiding the customer toward a solution.

In another scenario, the model can deconstruct a company’s internal rules and procedures to improve the training program for new employees.

Corporations can quickly adapt these models to their own needs without first building their own model, something they have had to do until now.

Risks of implementation

The implementation of such AI systems has risks and challenges that, if not considered, could result in more sizable problems in the future.

The quality of the seed model will always define its effectiveness and how much it can be trusted when used in real-life applications: if the model is flawed, then the answer may be flawed, too.

On the other hand, a lack of human oversight could lead to false information being accepted as reliable input, producing wrong or suboptimal answers. The model could also reach an accurate answer through flawed logic.

While the model seems to be working well in theory, only time will tell if it can reliably perform the tasks we expect it to.

Considering the pace of AI development, we may not have to wait very long.

The post META’s new AI Evaluator will transform large language model training, here’s how appeared first on Invezz
