The Download: China’s Social Credit Law and Robot Dog Navigation


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s happening in the tech world.

This is why China’s new social credit law matters

It’s easier to say what China’s social credit system isn’t than what it is. Ever since China announced plans to build it in 2014, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there is an opportunity to correct the record.

Most people outside of China assume it will act as a Black Mirror-style system powered by technology to automatically rate each Chinese citizen based on what they have done right and wrong. Instead, it’s a mix of attempts to regulate the lending industry, to allow government agencies to share data with each other, and to promote state-sanctioned moral values—as vague as that sounds.

Although the system itself will still take a long time to materialize, with the release of a draft law last week, China is now closer than ever to defining what it will look like — and how it will affect the lives of millions of citizens. Read the full story.

— Zeyi Yang

Watch this robot dog climb difficult terrain using only its camera

The news: When Ananye Agarwal took his dog for a walk up and down the stairs at a local park near Carnegie Mellon University, other dogs stopped in their tracks. That’s because Agarwal’s dog was a robot, and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to get around, his robot uses a built-in camera, computer vision, and reinforcement learning to walk over difficult terrain.

Why it matters: While other attempts to use camera cues to guide a robot’s movement have been limited to flat terrain, Agarwal and his fellow researchers were able to get their robot to climb stairs, clamber over rocks, and jump across gaps. They hope their work will help make it easier to deploy robots in the real world, greatly improving their mobility in the process. Read the full story.

— Melissa Heikkilä

Trust large language models at your own risk

When Meta launched Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was criticism on Twitter and a scathing blog post from one of its most vocal critics, culminating in its awkward decision to pull the plug on the model’s public demo after just three days.

Galactica was intended to help scientists by summarizing academic papers and solving math problems, among other tasks. But outsiders quickly prompted the model to provide “scientific research” on the benefits of homophobia, anti-Semitism, suicide, eating glass, whiteness, and masculinity, demonstrating not only how premature its launch was, but just how inadequate AI researchers’ efforts to make large language models safer have been. Read the full story.

This story is from The Algorithm, our weekly newsletter that gives you the inside scoop on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve scoured the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Verified anti-vax Twitter accounts are spreading health misinformation
And neatly demonstrating the problem with charging for verification in the process. (The Guardian)
+ Maybe Twitter hasn’t helped your career as much as you think. (Bloomberg $)
+ A deepfake of FTX’s founder has been spreading on Twitter. (Motherboard)
+ Some liberal Twitter users are refusing to quit. (Atlantic $)
+ The Twitter bloodbath is apparently over. (The Verge)
+ The potential collapse of Twitter could erase vast records of recent human history. (MIT Technology Review)

2 NASA’s Orion spacecraft has completed its orbit around the Moon
Paving the way for humans to return to the moon. (Vox)

3 Amazon’s warehouse monitoring algorithms are trained by humans
Poorly paid workers in India and Costa Rica sift through thousands of hours of mind-numbing footage. (The Verge)
+ The AI data-labeling industry is deeply exploitative. (MIT Technology Review)

4 How to make sense of climate change
Accepting the hard facts is the first step towards avoiding the darkest end for the planet. (New Yorker $)
+ The world’s richest nations have agreed to pay for the damage caused by global warming. (Atlantic $)
+ These three charts show who is most to blame for climate change. (MIT Technology Review)

5 Apple has exposed a cybersecurity startup’s dubious dealings
It compiled a document illustrating the extent of Corellium’s ties, including to the notorious NSO Group. (Wired $)
+ The hacking industry is facing the end of an era. (MIT Technology Review)

6 The crypto industry still feels uneasy
Shares in the biggest exchange have fallen to historic lows. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system fails neurodivergent people
Impersonating an online troll led to an autistic man being jailed for five and a half years. (Economist $)

8 Your workplace may be planning to scan your brain 🧠
All in the name of becoming a more effective employee. (IEEE Spectrum)

9 Facebook doesn’t care if your account is hacked
A series of new account recovery options don’t seem to have had much effect. (WP $)
+ Parent company Meta is being sued in the UK over data collection. (Bloomberg $)
+ Independent creators build the metaverse their own way. (Motherboard)

10 Why training AI to generate images on generated images is a bad idea
“Contaminated” images will only confuse them. (New Scientist $)
+ Facial recognition software used by the US government is reportedly not working. (Motherboard)
+ The dark secret behind these cute AI-generated animal images. (MIT Technology Review)

Quote of the day

“I feel like they used to care more.”

— Ken Higgins, an Amazon Prime member who is losing faith in the company after a series of disappointing delivery experiences, speaking to the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On an unremarkable side street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change the way women monitor their health.

Their plan is to use blood from used tampons as a diagnostic tool. In that menstrual blood, they hope to find early markers of endometriosis, and eventually a variety of other diseases. If it works, the simplicity and ease of their method would represent a huge improvement over today’s standard of care. Read the full story.

— Dayna Evans

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet them at me.)

+ Happy Thanksgiving—in your nightmares!
+ Why Keith Haring’s legacy is more visible than ever, 32 years after his death.
+ Even the genteel world of dinosaur-skeleton assembly is not immune to scandal.
+ Pumpkins are a Thanksgiving staple, but that wasn’t always the case.
+ If I lived in a frozen wasteland, I’m pretty sure I’d also be the angriest cat in the world.
