Signal (messaging app)

Signal was recommended by a tech-savvy friend as it apparently does not harvest or sell our data.

I just started using it.

How commonly used is it in Taiwan? Are forumosans enthusiastic about it, neutral, or indifferent?

Guy

1 Like

I use it, but not many people are on it. I find Telegram is growing.

3 Likes

I use it and try to get other people to do the same. Most of my friends whose jobs require some kind of clearance use it. I have my family on it. In Taiwan…not so common.

3 Likes

I use it with my family and friends in other countries. Nobody in Taiwan though. Taiwan seems hooked on Line.

1 Like

I use Signal as my primary communications app, and I've managed to get my friends and family on it. It's a good app, but it has recently had some negative press. The only people who are going to be using it here are those interested in security and privacy. Signal is not about anonymity, though; far from it. If that's what you're looking for, look elsewhere.

1 Like

Do you know what the negative press was about?

Guy

Negative press can sometimes be a good sign. The establishment does not want to lose control of messaging.

I’m still looking for a web-based messaging platform that doesn’t require smartphone sign-up. I haven’t used a smartphone for a couple of years.

1 Like

Unfortunately that excludes Signal, which does require a phone number, plus scanning a QR code to link additional devices (for example, your laptop or desktop).

Guy

Do you mean the crypto payments beta testing?

I use it to talk to family, since it can be set as the primary texting (SMS) app. That way I’m “texting” them without an extra app like Line. It’s also the same app I send 1922 messages from, which makes things easier.

1 Like

That’s useful advice. Thanks!

Guy

1 Like

Yes. It excludes most alt messaging apps. Telegram as well. Kind of annoying. I wonder if there’s one which works with dumbphones. That’d be fun.

I use Chatzy to chat with a couple of people. No encryption. It’s just a fun chatroom app. Anything to get away from that Orwellian FB feeling.

I have some programming experience. I played around with this and got it working in my local environment.

…but it’d take 15–30 hours of work to get a full messaging web app going, and I don’t have the energy right now. Summer is too hot.
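To give an idea of what I mean, here’s a rough sketch of the bare-bones core of such a thing, using only Python’s standard library (the endpoint and names are just made up for illustration, and it’s obviously nothing close to a real, secure messenger):

```python
# A rough sketch of the core of a web-based chat backend, standard library only.
# No accounts, no encryption, no persistence, no front end -- just enough to
# show where the other 15-30 hours of work would go.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

messages = []  # in-memory message log; everything is lost on restart


class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /messages returns the whole log as JSON.
        if self.path == "/messages":
            body = json.dumps(messages).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        # POST /messages with a JSON body like {"from": "guy", "text": "hi"}.
        if self.path != "/messages":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        try:
            msg = json.loads(self.rfile.read(length))
            messages.append({"from": str(msg["from"]), "text": str(msg["text"])})
        except (ValueError, KeyError, TypeError):
            self.send_error(400, "expected JSON with 'from' and 'text' fields")
            return
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    # Local testing only: http://localhost:8000/messages
    ThreadingHTTPServer(("", 8000), ChatHandler).serve_forever()
```

Reading and posting would just be plain HTTP requests from any browser or curl, no smartphone sign-up involved. The 15–30 hours is everything this leaves out: a front end, identities, moderation, and some form of encryption.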

This is an interesting topic. I think a large percentage of people are unhappy with messaging right now.

1 Like

Yep. A lot of people left Signal because of this.

I know an organization whose staff previously used Signal internally; they moved away from it too.

I also felt it was a weird move. Why add that to a simple messaging app?

It certainly hurts their credibility.

The current president of Signal and former Google executive Meredith Whittaker speaks to Nikkei Asia about AI and what the future may bring.

Signal president warns of risks from U.S.-China AI race

Spread of misinformation and climate damage among possible harms, says Whittaker


Meredith Whittaker, president of the Signal Foundation, says AI development poses an environmental threat, on top of raising social and governmental problems. (Source photos by Florian Hetz and screenshot from Signal’s website)

CISSY ZHOU, Nikkei staff writer | April 6, 2023 14:06 JST

HONG KONG – The race between the U.S. and China for supremacy in artificial intelligence is cause for concern given the technology’s potential to spread misinformation, enhance government surveillance and harm the climate, warned Meredith Whittaker, president of the Signal Foundation.

“AI is kind of a marketing term and a lot of companies selling very limited AI technologies brand themselves as AI companies. However, it is a technology that only a handful of companies and governments have the power to develop and deploy,” Whittaker, a former manager at Google, told Nikkei Asia in a recent interview in Tokyo.

AI has emerged as the latest front in the U.S.-China competition. The Communist Party has built a massive AI-based surveillance system across China and aims for the country to become the “world’s premier artificial intelligence innovation center” by 2030. American businesses and academics, meanwhile, are calling on Washington to pour funds into AI development to avoid being left behind by China.

Whittaker, who founded Google’s Open Research group to address issues such as the social impact of artificial intelligence, said AI advances in the past decade have been based on the “huge amount of data and computing power that only the largest tech companies in the U.S. and China possess.”

Whittaker joined the board of Signal, the foundation behind the free encrypted messaging app of the same name, in 2020, and in 2022 became its president. In 2021, she was tapped by the Federal Trade Commission to serve as a senior adviser on artificial intelligence.

In her interview with Nikkei, she spoke specifically about ChatGPT, whose popularity has spurred big tech companies in both the U.S. and China to accelerate plans for their own versions of the AI-powered chatbot.

“ChatGPT is highly hyped. There are real concerns about intellectual property theft, its capacity to manufacture misinformation and to muddy the waters of the information ecosystem,” Whittaker said.

Her comments came as dozens of tech leaders, including Elon Musk, signed an open letter last week calling for a global halt on the training of AI systems that are “more powerful than GPT-4” for at least six months, citing “profound risks to society and humanity.” Microsoft-backed OpenAI released GPT-4 last month, a more powerful version of the technology that underpins ChatGPT.

Whittaker’s concerns, however, are not limited to a single platform or company.

“AI technologies are tools that are built in concentrated resources, and they can be powerfully applied to augment social control, to increase surveillance capabilities, to automate socially significant decision making, and to reduce the power and standing of workers,” she said.

The “arms race” between the world’s two largest economies, moreover, is being used in the U.S. to fight tighter regulations on the tech sector, according to Whittaker.

“We increasingly see China-centered arguments against tech accountability and antitrust that frame these regulatory interventions as a barrier to national progress in this so-called ‘AI arms race.’ But AI technologies can help increase the power of those who already have it – the states and large firms – over those who don’t. This is not a justification that the majority of us should be content with,” she said.

In a military context, AI has a wide range of applications, from surveillance and reconnaissance to logistics and command and control capabilities. The People’s Liberation Army of China spent more than $1.6 billion annually in recent years on AI-enabled systems, according to a 2021 estimate by the Center for Security and Emerging Technology, a policy research organization at Georgetown University. The U.S. Department of Defense, meanwhile, likely spent between $800 million and $1.3 billion.

In terms of private investment, however, the U.S. leads the world. According to the Artificial Intelligence Index Report 2023 published by Stanford University on Monday, private AI investment in the U.S. came to $47.4 billion in 2022, compared with $13.4 billion for China, in second place.

China, however, maintains a lead in terms of various types of AI research publications.

Apart from its social and governmental impact, Whittaker said AI development also poses an environmental threat.

“From a climate footprint perspective, training a large language model is very computationally intense, so it takes a huge amount of computer power, which takes a huge amount of energy,” she said.

Although there is no standard benchmark for tracking the carbon intensity of AI systems, training the Bloom model – the world’s largest open-science, open-access multilingual large language model – emitted 25 times more carbon than an air traveler making a one-way trip from New York to San Francisco, according to the Stanford report.

“I am not anti tech companies or anti government. But I think that the incentives driving tech companies and politicians can lead to harmful outcomes, and AI’s harms and flaws must be open to question,” Whittaker said.

Guy