A research agenda for assessing the economic impacts of code generation models
We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.
Solving (some) formal math olympiad problems
We’ve fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser.
WebGPT: Improving the factual accuracy of language models through web browsing
We’ve trained a system that solves grade school math problems with nearly twice the accuracy of a fine-tuned GPT-3 model. It solves about 90% as many problems as real kids: a small sample of 9-12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems.
Solving math word problems
We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce.
Introducing Triton: Open-source GPU programming for neural networks
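The core idea behind Triton-style GPU programming is that each program instance processes one fixed-size block of elements, with a mask guarding the ragged final block. The sketch below is not Triton itself but a NumPy emulation of that block-and-mask pattern (real Triton kernels express the same structure with `tl.program_id`, `tl.arange`, and masked `tl.load`/`tl.store`); all names here are illustrative.

```python
import numpy as np

# Emulated block-program model: each "program instance" (pid) handles one
# BLOCK_SIZE-wide tile of the output. On a GPU these instances run in
# parallel; here we loop over them sequentially.
BLOCK_SIZE = 4

def vector_add(x, y):
    n = x.shape[0]
    out = np.empty_like(x)
    num_programs = (n + BLOCK_SIZE - 1) // BLOCK_SIZE  # ceiling division
    for pid in range(num_programs):
        offsets = pid * BLOCK_SIZE + np.arange(BLOCK_SIZE)
        mask = offsets < n          # guard out-of-bounds lanes in the last tile
        idx = offsets[mask]
        out[idx] = x[idx] + y[idx]  # masked load, compute, masked store
    return out

x = np.arange(10, dtype=np.float32)
y = np.ones(10, dtype=np.float32)
result = vector_add(x, y)  # elementwise sum of x and y
```

The mask is what lets one kernel handle array lengths that are not multiples of the block size, a detail the compiler would otherwise force onto the programmer in raw CUDA.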
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
CLIP: Connecting text and images
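The zero-shot mechanism described above can be sketched in a few lines: embed the image and each candidate category name, then pick the category whose text embedding is most similar to the image embedding. The encoders below are toy stand-ins for CLIP's learned networks, and all vectors are made up for illustration.

```python
import numpy as np

def normalize(v):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy stand-in embeddings; in CLIP these come from learned image/text encoders.
image_embedding = normalize(np.array([0.9, 0.1, 0.0]))
text_embeddings = normalize(np.array([
    [1.0, 0.0, 0.0],   # e.g. "a photo of a dog"
    [0.0, 1.0, 0.0],   # e.g. "a photo of a cat"
    [0.0, 0.0, 1.0],   # e.g. "a photo of a car"
]))

# Zero-shot classification: cosine similarity of the image against every
# category prompt, softmax over similarities, argmax picks the label.
logits = 100.0 * text_embeddings @ image_embedding  # temperature-scaled
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted = int(np.argmax(probs))  # index of the best-matching category
```

Because the "classifier" is just the list of category names, swapping benchmarks only requires swapping the text prompts, with no retraining.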