Everything you need to know about xAI’s all-new Grok-1.5
Grok-1.5 will be able to process contexts of up to 128K tokens within its window, a sixteen-fold increase in the LLM’s memory capacity.

Grok 1.5 explained: In a tweet on Friday, billionaire Elon Musk announced that xAI's latest Grok-1.5 chatbot will be accessible to users from the first week of April. Musk said he was confident that the second version of xAI’s generative AI chatbot would surpass current AI standards.
xAI further confirmed the upcoming release of Grok-1.5, an upgraded iteration of its chatbot Grok, in a statement issued on Thursday. Early testers and existing Grok users on X can expect access to the enhanced version in the coming days.
Two weeks earlier, xAI open-sourced Grok-1, its first AI model, after Elon Musk sued OpenAI and its co-founder Sam Altman for allegedly breaching their founding agreement to remain open source. Now that Musk is ready to release the updated GenAI tool for all X Premium users, here is a detailed overview of the features and updates the new Grok will come packed with.
Better reasoning abilities
xAI claims that Grok-1.5 brings major improvements, particularly on coding and mathematical tasks. According to internal assessments conducted by xAI, Grok-1.5 scored 50.6 per cent on the MATH benchmark and 90 per cent on the GSM8K benchmark. Together, these benchmarks cover mathematical problems ranging from grade-school to high-school competition level.
Comparison of Grok-1.5 with other GenAI models | Image credit: xAI
For context, OpenAI’s GPT-4 scored only 2 per cent higher on the test, Google’s Gemini scored more than 58 per cent, while Anthropic’s Claude 3 Opus scored more than 61 per cent.
Apart from that, Grok-1.5 scored 74.1 per cent on the HumanEval benchmark, which assesses proficiency in code generation and problem-solving.
Better contextual understanding with higher token processing
Grok-1.5 brings the capability to process contexts of up to 128K tokens within its window. This is a sixteen-fold increase in memory capacity compared to Grok’s previous versions and will enable the AI chatbot to extract insights from much longer documents.
Tokens, serving as the fundamental units of input, enable the AI models to comprehend and process textual data in natural language processing tasks such as text generation and classification. Each token corresponds to a distinct linguistic unit, allowing the model to sequentially analyse and generate text.
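The idea can be illustrated with a toy tokeniser. This is a minimal sketch only: real LLMs such as Grok use subword tokenisers (e.g. byte-pair encoding), so actual token boundaries and counts differ, but the text → tokens → integer IDs pipeline is the same.

```python
import re

def toy_tokenize(text):
    # Split text into words and individual punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

def toy_encode(tokens, vocab):
    # Map each token to an integer ID, growing the vocabulary on the fly.
    ids = []
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
        ids.append(vocab[tok])
    return ids

vocab = {}
tokens = toy_tokenize("Grok-1.5 processes long contexts.")
ids = toy_encode(tokens, vocab)
print(tokens)  # ['Grok', '-', '1', '.', '5', 'processes', 'long', 'contexts', '.']
print(ids)     # [0, 1, 2, 3, 4, 5, 6, 7, 3]
```

Note that the repeated token `.` maps to the same ID both times it appears; a model's context window is measured in these token units, not in characters or words.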
Image credit: xAI
The image shows a graph that visualises the model's ability to recall information from its context window. The X-axis is the length of the context window and the Y-axis is the relative position of the fact to retrieve from the window.
With this update, Grok-1.5 will be able to handle longer and more intricate prompts while preserving its proficiency in following instructions, even as its context window expands.
In evaluations such as the Needle In A Haystack (NIAH) assessment, Grok-1.5 showcased remarkable retrieval capabilities, achieving flawless results in retrieving embedded text within contexts spanning up to 128K tokens in length.
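The shape of a NIAH test can be sketched as follows. A known fact (the "needle") is embedded at varying relative depths inside filler text (the "haystack"), and the model is scored on how often it retrieves the fact. The `query_model` callable here is a hypothetical stand-in for a real model API; xAI's actual evaluation harness is not public.

```python
def build_haystack(needle, filler_sentence, total_sentences, depth_fraction):
    """Insert the needle at a relative depth within repeated filler text."""
    sentences = [filler_sentence] * total_sentences
    position = int(depth_fraction * total_sentences)
    sentences.insert(position, needle)
    return " ".join(sentences)

def niah_score(query_model, needle, answer, depths, total_sentences=1000):
    """Fraction of tested depths at which the model retrieves the answer."""
    hits = 0
    for depth in depths:
        context = build_haystack(needle, "The sky is blue.", total_sentences, depth)
        prompt = context + "\nWhat is the magic number?"
        if answer in query_model(prompt):
            hits += 1
    return hits / len(depths)

# Mock "model" that trivially finds the needle, to show the scoring flow:
mock_model = lambda p: "The magic number is 42." if "42" in p else "Unknown."
score = niah_score(mock_model, "The magic number is 42.", "42",
                   depths=[0.0, 0.25, 0.5, 0.75, 1.0])
print(score)  # 1.0 -> perfect retrieval at every tested depth
```

A score of 1.0 across all depths and context lengths corresponds to the "flawless" retrieval xAI reports for Grok-1.5 up to 128K tokens.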
Advanced infrastructure for LLM research
Cutting-edge research in Large Language Models (LLMs) requires an efficient and adaptable infrastructure. Grok-1.5 is underpinned by a bespoke distributed training framework leveraging JAX, Rust, and Kubernetes.
This training stack lets developers prototype concepts and train novel architectures at scale with minimal complexity. One of the primary challenges in training LLMs on expansive GPU clusters is ensuring the reliability and continuous operation of the training process.
To address this, xAI’s custom training orchestrator automatically identifies and removes problematic nodes from the training job, thereby maximising uptime and efficiency.
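The underlying pattern can be sketched as a simple eviction loop. Everything here is illustrative: the node fields, thresholds, and function names are assumptions for the sketch, since xAI's real orchestrator (built on Rust and Kubernetes) is not public.

```python
def evict_unhealthy(nodes, health_check, max_failures=3):
    """Partition nodes into (healthy, evicted) based on a health probe
    and a per-node failure count. Evicted nodes are dropped from the job."""
    healthy, evicted = [], []
    for node in nodes:
        if health_check(node) and node["failures"] < max_failures:
            healthy.append(node)
        else:
            evicted.append(node)
    return healthy, evicted

nodes = [
    {"name": "gpu-0", "failures": 0, "responsive": True},
    {"name": "gpu-1", "failures": 5, "responsive": True},   # flaky: too many failures
    {"name": "gpu-2", "failures": 0, "responsive": False},  # hung: fails health probe
]
healthy, evicted = evict_unhealthy(nodes, lambda n: n["responsive"])
print([n["name"] for n in healthy])   # ['gpu-0']
print([n["name"] for n in evicted])   # ['gpu-1', 'gpu-2']
```

In a real cluster this check would run continuously, with evicted nodes replaced by spares so the distributed training job keeps making progress.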