Context still matters

Nakalembe uses machine learning, computer vision, and deep learning models to map cropland, classify crop types, and estimate yields in Uganda, Kenya, Senegal, and other African nations. But most AI models are trained on European and U.S. data, and are largely useless unless they are adapted for local contexts, she told Rest of World.

“AI systems built in the West often also fail to account for the contexts of the Global South, including high internet costs, limited bandwidth, and a lack of labeled training data,” said Nakalembe, an assistant professor at the University of Maryland, and Africa program director at NASA Harvest, which uses satellite imagery to improve agricultural production.

“If these systems aren’t adapted, they remain irrelevant, potentially deepening existing inequalities in wealth and access to resources, [and] there is a risk that these systems prioritize corporate and company profit over farmers,” she said.

“It was such a joyous moment to see water collecting into the stepwell after clearing 40 years of garbage,” says Hajira Adeeb, a 45-year-old resident of Bansilalpet, who grew up watching the well transform from the community’s water source into a dumping ground. “I visit almost every day. The area is clean and lit up in the evenings. I enjoy sitting there.”

India is famed for its stepwells – multi-storey structures built to provide access to groundwater, with steps and platforms descending to the water level. Thousands were built across the country near natural aquifers – underground porous rock saturated with water – mostly between the 11th and 18th centuries.

The wells were abandoned under the rule of the British, who considered them unhygienic and largely prohibited their use, and deteriorated further in the late 20th century when people started to use them as a place to discard rubbish.

The pictures and story are worth a look. I want to learn more about stepwells.

Wikipedia spelunking

I wrote a script to grab five random Wikipedia articles every day (a rough sketch of the idea follows the excerpt below). Sometimes it pays off with something interesting I'd probably never have read about. Like this one about Korean rain gauges:

Ch'ŭgugi... were rain gauges invented and used during the Joseon dynasty of Korea. They were invented and supplied to each provincial office during the reign of King Sejong the Great.

Early in the Joseon dynasty, a system was introduced to measure and report regional rainfall for the sake of agriculture. However, the method to measure rainfall in those days was primitive, recording the depth of rain water in puddles.

This method could not tell the exact rainfall, because rainwater is absorbed differently into the ground according to the local soil. To prevent errors of this kind, King Sejong the Great ordered the Gwansanggam... to build a rainwater container, the ch'ŭgugi, made of iron in August 1441.
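For the curious: a minimal sketch of that kind of script, assuming Python with the requests library and the Wikimedia REST API's random-summary endpoint (the endpoint and response fields are real; the helper name random_articles and the User-Agent string are illustrative, not my actual script):

```python
import requests

# Wikimedia REST API endpoint that returns the summary of a random article.
RANDOM_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

# Wikimedia asks clients to send a descriptive User-Agent; this one is made up.
HEADERS = {"User-Agent": "random-article-fetcher/0.1"}

def random_articles(count=5):
    """Fetch summaries of `count` random Wikipedia articles."""
    articles = []
    for _ in range(count):
        resp = requests.get(RANDOM_SUMMARY_URL, headers=HEADERS, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        articles.append(
            {
                "title": data["title"],
                "url": data["content_urls"]["desktop"]["page"],
                "extract": data.get("extract", ""),
            }
        )
    return articles

if __name__ == "__main__":
    for article in random_articles():
        print(f"{article['title']}: {article['url']}")
```

Run it from a daily cron job and you get five small surprises every morning.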

Only in the past five years has Ducrot, who turned ninety-three in June, become internationally recognized for her art, which she didn’t even begin making until she was in her fifties. When creating her works, she stands and uses a brush sometimes attached to a stick, sweeping loose arcs of ink or paint onto paper or fabric. She often later incorporates scraps of other papers or textiles. Her painted collages usually depict ecstatic figures and stylized landscapes; arrays of ovals or checkered patterns are a recurring feature. Typically made in series, her works are light, energetic, and uninhibitedly beautiful.

Lovely profile of Isabella Ducrot in the New Yorker.

Karpathy’s first frequently asked question is “Does the model ‘understand’ anything?” “That’s a philosophical question,” he answers diplomatically, “but mechanically: no magic is happening.” Does 200 lines of Python code understand anything? My siblings in Christ, I hope it’s clear how utterly bizarre this question is. And it translates directly to the same question for Anthropic’s Claude, which is not doing anything different. If we make the input file bigger, if we make the way it gets mathematically processed more efficient, if we prepend a long document describing how we imagine a helpful robot might act to the user’s input, at which of those steps does “understanding” happen?

AI isn't people