What you need to know
- Microsoft recently published a new blog post highlighting its efforts toward teaching small language models how to reason.
- It unveiled Orca 2, a small language model that demonstrates strong reasoning abilities by learning from the step-by-step reasoning traces of more capable LLMs, such as the models behind ChatGPT and Copilot.
- According to Microsoft's benchmarks, Orca 2 outperforms other models of similar size and rivals much larger LLMs when put to the test on complex tasks.
- Microsoft intends to train smaller language models using LLMs, ultimately expanding their capabilities.
There’s no doubt that Microsoft has placed all its bets on generative AI, especially after making a multi-billion-dollar investment in the technology and further extending its partnership with OpenAI.
Speaking of OpenAI, the company has just gone through a dramatic shake-up at the top. OpenAI’s board of directors stripped Sam Altman of his position as CEO, citing a lack of confidence in his leadership. Shortly after, Altman was offered a job at Microsoft leading its Advanced AI team alongside Greg Brockman, the OpenAI co-founder and president who resigned shortly after Altman’s ousting.
As all this unfolds, Microsoft has published a new blog post highlighting its efforts to teach small language models how to reason. A few months ago, the company debuted Orca, a 13-billion-parameter language model “demonstrating strong reasoning abilities by imitating the step-by-step reasoning traces of more capable LLMs.”
And now, it has unveiled Orca 2 (which comes in two sizes, 7 billion and 13 billion parameters) as part of its efforts to tap into the capabilities of smaller language models. According to Microsoft, Orca 2 shows that “improved training signals and methods can empower smaller language models to achieve enhanced reasoning abilities.” This is a significant feat, considering such capabilities are usually found only in much larger language models, including those powering ChatGPT and Copilot.
Admittedly, both chatbots have faced numerous setbacks this year, with several users claiming that ChatGPT is getting dumber amid reports that OpenAI is on the verge of bankruptcy. Meanwhile, another report indicated that Bing’s user base has stagnated for three consecutive months, despite Microsoft’s hefty investment in the technology.
Microsoft further says that Orca 2 significantly outperforms other models of similar size, including the original Orca. The company also indicates that it matches or exceeds the performance of much larger models when handling complex tasks that test “advanced reasoning abilities in zero-shot settings.”
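If you’re curious to poke at those reasoning claims yourself, here’s a minimal sketch of how a model like Orca 2 could be loaded and given a zero-shot reasoning prompt with the Hugging Face transformers library. Note that the repository name microsoft/Orca-2-7b and the plain-text prompt format are our assumptions; check Microsoft’s official model card for the exact chat template before relying on this.

```python
# Minimal sketch: loading an Orca 2-sized model and asking a zero-shot
# reasoning question. Model ID and prompt format are assumptions — consult
# the official model card for the recommended chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-7b"  # assumed repo name; a 13B variant is also described

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
    device_map="auto",
)

# A reasoning-style question posed zero-shot (no worked examples in the prompt).
prompt = "Solve step by step: if a train travels 60 km in 45 minutes, what is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens so only the model's answer is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```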
Microsoft’s Advanced AI team is on course
Amid the OpenAI fiasco over the weekend, Microsoft CEO Satya Nadella announced that Sam Altman would be joining the company to lead its Advanced AI team, a job that suits his capabilities and skill set.
Following the news of Altman’s ouster, more than 500 OpenAI staffers penned a letter to the board of directors requesting his reinstatement, arguing that the decision undermined their vision. The employees indicated they’d leave the company if their demands weren’t met, stating that “there’s no OpenAI without its people.”
According to sources familiar with the situation, Microsoft is ready to absorb all OpenAI employees into its AI division should they make good on their promise and leave the company.
Microsoft will likely leverage OpenAI’s talent to achieve more with Orca 2, allowing the company to use LLMs to train smaller language models and ultimately expand their capabilities.
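To make that idea a little more concrete, here’s a rough, simplified sketch of the general imitation recipe the Orca work describes: collect step-by-step answers from a stronger teacher model and use them as supervised targets for a smaller student. This is our own illustration, not Microsoft’s actual training pipeline, and the teacher model ID is a placeholder you’d swap for a model of your choice.

```python
# Illustrative sketch of the general "imitation" recipe: pair prompts with a
# stronger model's step-by-step answers, then fine-tune a smaller student on
# those pairs. Simplified illustration only — not Microsoft's Orca 2 pipeline.
import json
from transformers import pipeline

# Any capable instruction-following model can stand in as the teacher;
# the model ID below is a placeholder to replace.
teacher = pipeline("text-generation", model="REPLACE_WITH_TEACHER_MODEL")

def build_imitation_dataset(prompts, out_path):
    """Write prompt/response pairs (with reasoning traces) as JSONL."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            # Asking the teacher to explain step by step produces the
            # "reasoning traces" the student later learns to imitate.
            teacher_answer = teacher(
                f"Explain step by step: {prompt}",
                max_new_tokens=512,
                return_full_text=False,  # keep only the answer, not the prompt
            )[0]["generated_text"]
            record = {"instruction": prompt, "response": teacher_answer}
            f.write(json.dumps(record) + "\n")

# The resulting JSONL file can then feed a standard supervised fine-tuning
# loop (for example, Hugging Face's Trainer) for a 7B or 13B student model.
```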
Do you think Microsoft is on track with its Orca 2 ventures? Share your thoughts with us in the comments.