Robotics/AI

IF IT TASTES LIKE BEEF IT’S BEEF:

AI Translation Triumphs Over Human Translators in Korean Literary Contest (Park Jin-seong, 2026.02.02, Chosun Daily)

Recently, the Literature Translation Institute of Korea, under the Ministry of Culture, Sports and Tourism, conducted a blind test with 16 domestic professors of English literature. The test compared two English translations of the Joseon-era poet Jang Yu’s poem “Be Cautious When Alone” (Shindokjam), which is set to be exported to English-speaking regions: one by a professional translator and one by ChatGPT. Without revealing which translation was which, the professors were shown the original Korean text and both translations and asked which was better. Twelve professors chose the ChatGPT translation, two chose the human translation, and two declared it “undecidable.”

NO ONE WILL MISS MANAGEMENT:

A.I. Won’t Eliminate Managers, But It Will Redefine Leadership (Dominic Ashley-Timms • 01/02/26, The Observer)


For more than a century, the prevailing management model has been one of command-and-control. Managers were expected to be the nexus of knowledge, the primary problem-solvers and the arbiters of work. Promotion into management was typically a reward for attaining technical proficiency in a particular area, creating a legion of what the Chartered Management Institute (CMI) has called “accidental managers”—individuals promoted for their knowledge but utterly unprepared for the human complexities of leadership. In the U.K. alone, the CMI estimates that 82 percent of managers receive no formal preparation or training to take on the people management aspects of their role.

This is the category of manager that A.I. is coming for. The manager whose primary value lies in holding information, creating reports, assigning tasks and resolving routine problems is standing on a trapdoor. Generative A.I. and advanced analytics can now perform these functions with unprecedented speed and efficiency. Knowledge is no longer power because knowledge is ubiquitous. A recent MIT Sloan study found that access to A.I. tools increased productivity for knowledge workers by over 40 percent, largely by automating the synthesis and retrieval of information—the very tasks that once consumed a manager’s day.

Information wants to be free.

GREEN ENERGY VS RED TAPE:

These Companies Want To Use AI To Make Cheaper and Cleaner Energy—If the Government Lets Them (Jeff Luse, 12.29.2025, Reason)

While reducing paperwork may seem like a trivial fix, it’s an important one; a reactor license application can easily exceed 10,000 pages and undergo up to two years of review from federal regulators. And simple errors in these documents can set projects back and cost thousands of dollars. For one of Everstar’s clients, fixing an error in the licensing documentation, which CEO Kevin Kong tells Reason was “essentially a typo,” required “developing and getting approval for a formal License Amendment Request.” This request cost the developer “tens of thousands of dollars in engineering time and external consultants” and added months in regulatory review, according to Kong.

Gordian, the company’s AI-enabled platform, aims to eliminate cases like these by “automat[ing] compliance, technical documentation, and regulatory navigation for the nuclear industry,” says Kong. Since launching earlier this year, the technology has yielded impressive results. After Last Energy was given federal funding in August to demonstrate its advanced nuclear reactor, it partnered with Everstar to write a 50-page environmental assessment. What would normally take eight weeks was completed in one. The system was also able to turn around a 200-page ecology report—a revision that usually takes a few weeks—in one night.

Kong says his clients have been able to cut “30-40% of the time spent on major regulatory deliverables,” which can be the “difference between projects penciling out or not.” The company plans to scale up operations in the coming year.

IT’LL NEVER FLY, NOAM…:


For the First Time, AI Analyzes Language as Well as a Human Expert (Steve Nadis, 12/14/25, Wired)

For some in the linguistics community, language models not only don’t have reasoning abilities; they can’t have them. This view was summed up by Noam Chomsky, a prominent linguist, and two coauthors in 2023, when they wrote in The New York Times that “the correct explanations of language are complicated and cannot be learned just by marinating in big data.” AI models may be adept at using language, these researchers argued, but they’re not capable of analyzing language in a sophisticated way.

Gašper Beguš, a linguist at the University of California, Berkeley. Photograph: Jami Smith
That view was challenged in a recent paper by Gašper Beguš, a linguist at the University of California, Berkeley; Maksymilian Dąbkowski, who recently received his doctorate in linguistics at Berkeley; and Ryan Rhodes of Rutgers University. The researchers put a number of large language models, or LLMs, through a battery of linguistic tests—including, in one case, having the LLM generalize the rules of a made-up language. While most of the LLMs failed to parse linguistic rules in the way that humans are able to, one had impressive abilities that greatly exceeded expectations. It was able to analyze language in much the same way a graduate student in linguistics would—diagramming sentences, resolving multiple ambiguous meanings, and making use of complicated linguistic features such as recursion. This finding, Beguš said, “challenges our understanding of what AI can do.”

Chomsky is a synonym for “wrong” in every language.

YEAH, BUT IT TOOK A COUPLE YEARS…:

World’s largest polymer 3D printer helps speed construction of nuclear reactor parts (Georgina Jedikovska, Dec 05, 2025, Interesting Engineering)


US scientists have introduced a groundbreaking approach to building nuclear reactor components faster than ever before, using one of the world’s largest 3D printers.

The researchers at the University of Maine’s (UMaine) Advanced Structures and Composites Center (ASCC) used the super-sized polymer 3D printer to design and produce enormous, precision-shaped concrete form liners.

IT’LL NEVER FLY, ORVILLE:

As the 2025 Atlantic hurricane season ends, the future of forecasting is AI (Greg Allen, 11/29/25, NPR: Weekend Edition)

A week before the hurricane made landfall, however, forecast models disagreed on where it would go. One model that got it right — accurately predicting Melissa’s path and its category 5 intensity — was a new one: Google’s DeepMind AI-based hurricane model.

James Franklin, a former branch chief at the National Hurricane Center, analyzed how the forecast models performed this year, and says Google’s DeepMind outshone them all. “The model performed very, very well, which was very impressive,” he says. “It was the best guidance we saw this year.”

THERE’LL BE TIME ENOUGH FOR COUNTING:

What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate: “I’ll be shocked if we don’t see more and more LLM impact on science,” says John Jumper (Will Douglas Heaven, November 24, 2025, MIT Technology Review)

Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one.
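That “astronomical number of possible shapes” can be made concrete with a Levinthal-style back-of-envelope count. The three-states-per-residue figure below is an illustrative assumption, not a number from the article:

```python
# Back-of-envelope (Levinthal-style) estimate of the conformational
# search space for a small protein, assuming (purely for illustration)
# that each residue's backbone can occupy ~3 discrete states.
STATES_PER_RESIDUE = 3
residues = 100  # a small protein

conformations = STATES_PER_RESIDUE ** residues
magnitude = len(str(conformations)) - 1  # order of magnitude
print(f"~10^{magnitude} possible shapes")  # ~10^47
```

Even at this crude resolution, a 100-residue chain has on the order of 10^47 candidate shapes, which is why brute-force enumeration is hopeless and prediction is the interesting problem.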

Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle.
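The “paying attention to specific parts of a larger puzzle” can be sketched with generic scaled dot-product self-attention, the core operation inside a transformer. This is a minimal NumPy illustration of the mechanism in general, not AlphaFold 2’s actual architecture (which uses specialized attention over residue pairs):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query scores every key,
    and the output is the attention-weighted mix of the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))      # 4 "positions", 8-dim embeddings
out, w = attention(X, X, X)      # self-attention: Q, K, V all come from the same input
```

Each output row is a weighted average of all input rows, with the weights learned (here, random) to focus on whichever positions matter most for that query.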

But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.”


They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.”

What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.” […]

AlphaFold was designed to be used for a range of purposes. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.

Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.

AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.”

NOT JUST SHOWER CURTAIN RINGS?:

AI Is Suddenly Surprisingly Good At Physics (Sabine Hossenfelder, Nov 16, 2025)

LLMs aren’t able to actually use logic or reasoning to reach thought-out conclusions. Despite that, several startups plan on using the current systems to do serious physics research. And some physicists, including myself, have used AI chatbots like ChatGPT and Claude to write papers. The situation is changing incredibly fast. Let’s take a look at how LLMs might be improving at physics, and the current state of AI scientists.