It was supposed to be groundbreaking. When New York City’s task force to develop policy on algorithmic technologies was introduced two years ago, it was praised as a beacon of transparent and equitable government. It was supposed to inform other policymakers grappling with how to address their own use of automated technologies that make decisions in place of humans.
But for all its good intentions, the effort was bogged down in a bureaucratic morass. The task force failed to complete even a first necessary step in its work: getting access to basic information about the automated systems already in use, according to task force members and observers.
Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles.
As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.
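Pearl’s charge that deep learning is “just fitting a curve to data” can be made concrete with a minimal sketch. The example below is purely illustrative and not drawn from the passage: it uses NumPy’s `polyfit` to fit a small polynomial, a flexible function family that, like a neural network, learns only an association between inputs and outputs.

```python
import numpy as np

# Pearl's point in miniature: "learning" here is nothing more than
# choosing coefficients so a curve passes near the observed data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

# Fit a degree-5 polynomial -- a flexible function family, but still
# only an association between x and y, with no causal content.
coeffs = np.polyfit(x, y, deg=5)
y_hat = np.polyval(coeffs, x)

# The fit tracks the correlation present in this sample; it says
# nothing about what would happen if we intervened on x.
print(float(np.corrcoef(y, y_hat)[0, 1]))
```

However skillful the fit, the model captures only the pattern in the sample, which is exactly the distinction Pearl draws between association and the higher rungs of the causal hierarchy.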
SAN FRANCISCO — In recent months, satellite photos have streamed into a former textile factory here revealing a build-up of potent Russian air defense systems in Ukraine, a serious new threat to NATO aircraft. This is not a secret CIA facility, and the images didn’t come from a billion-dollar surveillance satellite. They were taken by private spacecraft — some the size of a loaf of bread — operated by Planet Labs, a Silicon Valley company that is leading a revolution in how humans glimpse Earth from space. A short stroll from the downtown San Francisco headquarters of Yelp and LinkedIn, Planet operates the largest and least expensive fleet of satellites in history — the first to take pictures of the entire landmass of the globe, once a day, and sell them to the public.
Since the introduction of so-called less-lethal weapons in France in the late 1990s, there has been no legal requirement to collect data on injuries induced by kinetic impact projectiles, and no epidemiological surveys have been planned. To estimate the number of patients with ocular injuries caused by the use of these defensive tools, a retrospective survey was sent to all ophthalmology department chairs in French university hospitals, which are where the most severe cases are managed.
Since August, as vast stretches of the Amazon rainforest were being reduced to ashes, and outrage and calls for action intensified, a group of lawyers and activists who have been advancing a radical idea have seen a silver lining in the unfolding tragedy: One day, a few years from now, they imagine Brazil’s president, Jair Bolsonaro, being hauled to The Hague to stand trial for ecocide, a term broadly understood to mean the willful and widespread destruction of the environment, and one that, they hope, will eventually be on par with other crimes against humanity.