T-NLG stands for Turing Natural Language Generation. It’s an AI model that predicts the natural flow of human language. The name comes from Alan Turing, a British mathematician famous for code-breaking during World War II. Turing devised the Turing machine, an early theoretical model of the computer, and the Turing test, a way to assess whether a computer can respond like a human.
Microsoft honored that early work when it created the Project Turing Team. This team developed large-scale natural language models to solve problems for businesses using Microsoft apps and services. Some of this research has been used in popular Microsoft Office apps like Word.

While you need a powerful GPU to train AI models, a well-built, low-cost Chromebook or an inexpensive Android phone can make use of these advances since many run in the cloud.
What is a natural language model?
Humans have developed thousands of spoken languages, but most people speak only a few. We speak in phonemes (the sounds made by your voice) and write with letters to form words and sentences. Those concepts are foreign to computers.
Computers rely on transistors to process input, and the only values those transistors accept are ones and zeroes. This binary code is the basis for machine language, much as letters are the basis for our words.
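To see the gap between human language and binary, here’s a small Python illustration of how a computer stores the letters we type as ones and zeroes:

```python
# How the word "AI" looks to a computer: each character is stored
# as a number (its Unicode code point), which the hardware holds
# as a pattern of ones and zeroes.
for char in "AI":
    code_point = ord(char)            # e.g., 'A' -> 65
    bits = format(code_point, "08b")  # 65 -> '01000001'
    print(f"{char} -> {code_point} -> {bits}")
```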

Computers are artificial, and humans are natural. That’s why an artificial intelligence (AI) designed to process human language is called a natural language model (NLM). If an NLM is sufficiently large, it’s called a large language model (LLM).
An AI system running an LLM can create a summary of a document or answer questions about the contents with results similar to a human’s. An NLM makes a computer respond naturally and perform tasks that normally require a human.
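As an illustration of that summarization task, here’s a minimal Python sketch using the open-source Hugging Face transformers library. It stands in for the general idea; Microsoft’s own models, including T-NLG, aren’t available this way:

```python
# A minimal document-summarization sketch using the open-source
# Hugging Face transformers library (an illustration of the task,
# not Microsoft's T-NLG, which was never publicly released).
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default model

document = (
    "Microsoft's Project Turing team builds large-scale natural "
    "language models that power features in products like Word, "
    "helping with tasks such as search and summarization."
)

result = summarizer(document, max_length=30, min_length=10)
print(result[0]["summary_text"])
```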
What is T-NLG?
T-NLG is a large language model created by Microsoft as part of its AI research and development program. When it was introduced in 2020, it outperformed many similar models, including OpenAI’s GPT and GPT-2, Google’s BERT, and Nvidia’s MegatronLM, in natural language processing (NLP) tests.
T-NLG was once the largest LLM, with 17 billion parameters: the adjustable values a model learns during training that shape the output it generates. Typically, larger models are better since they have more capacity to fine-tune their results, but that isn’t always true. Recent research shows that small LLMs built with millions of parameters can sometimes outperform those with hundreds of billions of parameters.
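To make “parameters” concrete, the short PyTorch sketch below counts the learned weights in a tiny two-layer network. Even this toy holds about 8.4 million parameters, and T-NLG scales the same idea to 17 billion:

```python
# "Parameters" are the learned weights inside a model. Counting them
# for a tiny network shows how quickly they add up; T-NLG has 17
# billion of them.
import torch.nn as nn

tiny_model = nn.Sequential(
    nn.Linear(1024, 4096),  # 1024*4096 weights + 4096 biases
    nn.ReLU(),
    nn.Linear(4096, 1024),  # 4096*1024 weights + 1024 biases
)

total = sum(p.numel() for p in tiny_model.parameters())
print(f"{total:,} parameters")  # about 8.4 million
```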
How does T-NLG work?
T-NLG, like many other AIs, is a Transformer-based generative model. Google introduced the Transformer architecture in 2017. It’s a deep-learning design that predicts which word should come next in a sentence, gleaning context from the sequence of words before it.
Like the swipe keyboard on your Android phone and Gmail’s predictive text when composing an email, the transformers used in LLMs like T-NLG help the AI generate text using this prediction technique. Gboard now has a proofreading feature, too.
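Here’s what that next-word prediction looks like in practice, sketched in Python with the small open-source GPT-2 model as a stand-in, since T-NLG itself isn’t publicly available:

```python
# Next-word prediction, the core trick behind Transformer models
# like T-NLG, demonstrated with the small open-source GPT-2 model
# via Hugging Face's transformers library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The Turing test measures whether a computer can"
output = generator(prompt, max_new_tokens=10, num_return_sequences=1)
print(output[0]["generated_text"])  # the prompt plus predicted words
```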
To train T-NLG, Microsoft used 256 Nvidia V100 GPUs, breaking the 17-billion-parameter model into several parts that run in parallel. This powerful hardware, combined with several software refinements, gave T-NLG an impressive understanding of natural language and the ability to predict, analyze, and generate text.
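That splitting technique is often called model parallelism. The toy NumPy sketch below shows the core idea: a weight matrix too big for one device is divided column-wise, each “device” computes its slice, and the partial results are stitched back together. Microsoft’s real setup was far more sophisticated; this only illustrates the principle:

```python
# A toy sketch of model parallelism: split one large weight matrix
# column-wise across "devices" (here, plain NumPy arrays), compute
# each slice separately, then gather the results. Microsoft's actual
# training across 256 GPUs was far more elaborate.
import numpy as np

x = np.random.randn(1, 8)    # one input activation vector
W = np.random.randn(8, 16)   # a weight matrix too "big" for one device

W_dev0, W_dev1 = np.split(W, 2, axis=1)  # each device holds half the columns

y_dev0 = x @ W_dev0          # partial result on device 0
y_dev1 = x @ W_dev1          # partial result on device 1

y = np.concatenate([y_dev0, y_dev1], axis=1)  # gather the halves
assert np.allclose(y, x @ W)  # matches the unsplit computation
```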
Is Microsoft still using T-NLG?
Microsoft researches new and more powerful AI implementations in its Project Turing research program. It’s unclear if Microsoft still uses the 17 billion parameter model it developed in 2020.
The company uses Turing models for SmartFind in Microsoft Word and Question Matching in Xbox. Microsoft employs generative text in language understanding, question answering, text prediction, and summarization across many of its apps and services.
Several new NLP models have been described by the Microsoft Turing Team since 2020: Megatron-Turing NLG 530B, a massive LLM with 530 billion parameters; the Turing Universal Language Representation models (T-ULRv5 in 2021 and T-ULRv6 in 2022), which are multilingual LLMs; Turing Image Super Resolution, an image-enhancing AI; and Turing Bletchley, a multilingual image recognition model.
Microsoft AI and OpenAI
Along with Microsoft’s internal work on LLMs, the company invested $1 billion in OpenAI’s research in 2019. The partnership grew in 2021 as Microsoft added more funds and became the exclusive provider of cloud services for OpenAI.
In February 2023, this collaboration reached a new level when Microsoft incorporated OpenAI’s GPT-4 technology into the Bing search engine. It added Bing Chat as a tab and provided generative text in a box at the side of every Bing search. Bing Chat is also available as an app on Android phones, an alternative to Google Assistant and Bard, Google’s generative AI model.
Microsoft also uses OpenAI’s Dall-E AI image generator for Bing Create. Over time, OpenAI’s technology is being integrated into other Microsoft apps and services.
That doesn’t mean Microsoft has stopped working on AI. The Microsoft Turing Team is still active and busy researching new AI models and their uses.
More AI models
OpenAI’s ChatGPT is well-known as the first LLM to reach widespread public use. Google’s Bard competes well with ChatGPT, and Google recently upgraded Bard with its Gemini model, which accepts multimodal input.
This is just the beginning, and more AI advances are expected in 2024 and beyond. Generative AI and multimodal LLMs will continue to improve and enhance technology for the foreseeable future as leading manufacturers embrace the opportunity to create more useful and user-friendly products.