About 4,000 volunteers gathered at the Orange County Convention Center on Martin Luther King Jr. Day to pack meals for people affected by the recent California wildfires and for those in need across Central Florida.
MacKenzie and his colleagues have sent this quiet forest just outside Birmingham into the future – in a manner of speaking. They have pumped carbon dioxide (CO2) around the mature oak trees here to simulate the atmosphere expected to swathe planet Earth by 2050.
The team has been running the experiment for seven years, and the results have surprised them. Contrary to some previous analyses, their study suggests that trees can actually absorb more carbon as they age – a finding that highlights the immense importance of mature, temperate forests for climate regulation.
What's more, for the first time, MacKenzie and his fellow forest-watchers have also shown that microscopic organisms living on these trees capture methane, another potent greenhouse gas. "[We] found the trees are providing another unexpected service for us," says MacKenzie. "The canopy hosts microbes, and these microbes eat the methane. There are lots of reasons to nurture forests."
The findings from Staffordshire indicate that trees take up around 25 to 50 million tonnes of atmospheric methane each year, making them 7-12% better for the climate than they are currently given credit for.
https://www.bbc.com/future/article/20250120-experimental-forests-reveal-the-ways-trees-help-cool-the-climate
The researchers shared sample historical questions with TechCrunch that LLMs got wrong. For example, GPT-4 Turbo was asked whether scale armor was present during a specific time period in ancient Egypt. The LLM said yes, but the technology only appeared in Egypt 1,500 years later.
Why are LLMs bad at answering technical historical questions, when they can be so good at answering very complicated questions about things like coding? Del Rio-Chanona told TechCrunch that it’s likely because LLMs tend to extrapolate from historical data that is very prominent, finding it difficult to retrieve more obscure historical knowledge.
For example, the researchers asked GPT-4 if ancient Egypt had a professional standing army during a specific historical period. While the correct answer is no, the LLM answered incorrectly that it did. This is likely because there is lots of public information about other ancient empires, like Persia, having standing armies.
Top LLMs performed poorly on a high-level history test, a new paper has found. – Charles Rollet (TechCrunch)
The state is seeing a sharp water divide this year, with lots of rain in the north while the south has stayed dry. A hydrologist explains what's happening. – The Conversation
More than 40% of individual corals monitored around One Tree Island reef bleached by heat stress and damaged by flesh-eating disease. – Graham Readfearn (The Guardian)