The optimist’s guide to artificial intelligence and work

It’s easy to fear that the machines will take over: companies like IBM and the British telecom BT are citing artificial intelligence as a reason for downsizing, and new tools like ChatGPT and DALL-E are making it possible for anyone to experience the extraordinary capabilities of artificial intelligence for themselves. A recent study by researchers from OpenAI (the startup behind ChatGPT) and the University of Pennsylvania concluded that about 80 percent of jobs could have at least 10 percent of their tasks automated using the technology behind such tools.

“Everyone I talk to, super smart people, doctors, lawyers, CEOs, other economists, their brain first asks, ‘Oh, how can generative AI replace what humans are doing?’” said Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI.

But that’s not the only option, he said. “I also wish people would think more about what new things could be done now that have never been done before. Obviously that’s a much harder question.” He added that that’s also where “the greatest value lies.”

Brynjolfsson and other economists say how technology developers design AI tools, corporate leaders use them, and policymakers regulate them will determine how generative AI ultimately impacts jobs. And not all decisions are necessarily grim for workers.

AI can complement human work instead of replacing it. For example, many companies use AI to automate call centers. But a Fortune 500 company that provides enterprise software has instead used a tool like ChatGPT to give its employees live suggestions on how to respond to customers. In a study, Brynjolfsson and his co-authors compared call center agents who used the tool with those who did not. They found that the tool increased productivity by an average of 14 percent, with most of the gains coming from low-skilled workers. Customer sentiment was also higher and employee turnover lower in the group using the tool.

David Autor, an economics professor at the Massachusetts Institute of Technology, said AI could potentially be used to provide “expertise on demand” in professions such as healthcare delivery, software development, law and skilled repair. “It’s an opportunity to give more workers the opportunity to do valuable work that relies on some of that expertise,” he said.

Employees can focus on different tasks. When ATMs automated cash dispensing and deposit-taking, the number of bank tellers actually increased, according to an analysis by James Bessen, a researcher at Boston University School of Law. That was partly because branches needed fewer workers and so became cheaper to open – and banks opened more of them. But banks also changed the job itself: after ATMs, tellers spent less time counting cash and more time building relationships with customers, to whom they sold products like credit cards. Few jobs can be fully automated by generative AI. But using an AI tool for some tasks can free employees to take on work that cannot be automated.

New technologies can lead to new jobs. In 1900, nearly 42 percent of the labor force was employed in agriculture, but thanks to automation and technological advances, that figure had fallen to about 2 percent by 2000. The huge decline in agricultural jobs did not result in widespread unemployment. Instead, technology created many new jobs. A farmer in the early 20th century could not have imagined computer programming, genetic engineering or trucking. In an analysis of census data, Autor and his co-authors found that 60 percent of current job specialties didn’t exist 80 years ago.

Of course, there is no guarantee that workers will be qualified for new jobs or that they will be good jobs. And none of that just happens, said Daron Acemoglu, MIT economics professor and co-author of “Power and Progress: Our 1,000-Year Struggle Over Technology & Prosperity.”

“If we make the right decisions, we will actually create new types of jobs, which is critical for wage growth and also for realizing the productivity benefits,” Acemoglu said. “But if we don’t make the right decisions, a lot less of that can happen.” – Sarah Kessler

Martha’s exemplary behavior. Lifestyle entrepreneur Martha Stewart became the oldest person to be pictured on the cover of Sports Illustrated’s swimsuit issue this week. Stewart, 81, told The Times that having the confidence to pose was a “big challenge” but that two months of Pilates helped. She’s not the first person over 60 to earn the distinction: Maye Musk, Elon Musk’s mother, graced the cover last year at the age of 74.

TikTok block. Montana became the first state to ban the Chinese short-video app, barring app stores from offering TikTok within its borders starting Jan. 1. The ban is expected to be difficult to enforce, and TikTok users across the state have sued the government, arguing that the move violates their First Amendment rights. The fight offers a glimpse of the backlash that could follow if the federal government attempts to block TikTok nationwide.

Blaming bankers. Greg Becker, the former CEO of Silicon Valley Bank, blamed “rumors and misconceptions” for the run on deposits in his first public comments since the lender’s collapse in March. Becker and former top executives of the failed Signature Bank also told a Senate committee investigating their role in the collapses that they would not return millions of dollars in pay.

When OpenAI CEO Sam Altman testified before Congress this week calling for regulation of generative artificial intelligence, some lawmakers hailed it as a “historic” move. In fact, asking lawmakers for new rules is a move straight out of the tech industry playbook. Silicon Valley’s most powerful leaders have long traveled to Washington to demonstrate their commitment to rules, and to seek to shape them, even as they unleash some of the world’s most powerful and transformative technologies.

One reason: a single federal rule is much easier to navigate than different regulations in different states, Bruce Mehlman, a policy adviser and former technology policy official in the Bush administration, told DealBook. Clearer regulation also boosts investor confidence in a sector, he added.

The strategy sounds reasonable, but if history is any guide, the reality can be messier than the rhetoric:

  • In December 2021, Sam Bankman-Fried, founder of the failed crypto exchange FTX, was one of six executives to testify before the House of Representatives about digital assets, calling for regulatory clarity. His company had just submitted a proposal for a “single joint regime,” he told lawmakers. A year later, Bankman-Fried’s businesses were bankrupt and he was charged with fraud and illegal campaign contributions.

  • In 2019, Facebook founder Mark Zuckerberg wrote an opinion piece in The Washington Post titled “The Internet Needs New Rules,” which addressed failures in content moderation, election integrity, privacy and data governance. Two years later, independent researchers found that misinformation was more prevalent on the platform than it had been in 2016, even though the company had spent billions to stamp it out.

  • In 2018, Apple CEO Tim Cook said he was fundamentally averse to regulation but advocated stricter privacy rules: “It’s about time a group of people started thinking about what can be done.” But in China, one of its largest markets, Apple has largely ceded control of customer data to the government as part of the requirements to operate there.

Platforms like TikTok, Facebook, Instagram and Twitter use algorithms to identify and moderate problematic content. To evade these digital moderators and allow open discussion of taboo topics, users have developed a linguistic code. It’s called algospeak.

“A language arms race is raging on the Internet – and it’s not clear who is winning,” writes Roger J. Kreuz, professor of psychology at the University of Memphis. Posts about sensitive topics like politics, sex or suicide can be flagged and removed by algorithms, leading to creative misspellings and stand-ins like “seggs” and “mascara” for sex, “unalive” for death, and “cornucopia” for homophobia. Kreuz notes that there is a long history of responding to bans with code, such as Cockney rhyming slang in 19th-century England or “Aesopian,” an allegorical language used in Tsarist Russia to circumvent censorship.

It isn’t just algorithms that miss the code. The euphemisms and misspellings are ubiquitous, especially in marginalized communities. But sometimes the hidden language eludes people too, which can lead to problematic misunderstandings online. In February, the celebrity Julia Fox got into an embarrassing exchange with a sexual assault victim after misreading a post about “mascara,” and had to publicly apologize for responding to what she believed was a discussion about makeup.

Thank you for reading!

We appreciate your feedback. Please email your thoughts and suggestions to [email protected].