Japan may abandon the development of 5G networks for the health of citizens | Красная Армия

Japan will not develop or invest in the creation of fifth-generation mobile networks, according to a statement on the official website of the country’s Ministry of High Technologies. Officials expressed the view that creating an internet faster than today’s is dangerous for the population and may adversely affect people’s health and labor productivity.

A study by the University of Nagasaki provides figures showing an increase in mental disorders and fatigue among high-speed internet users. According to the scientists, the more intellectual operations a person performs per minute, the more prone he is to stress, absent-mindedness and heightened nervous excitement. The Ministry of Health adds that the use of 5G networks could cause the country’s average life expectancy to fall for the first time in 60 years, and that the impact of high-speed internet on the human body is comparable to the effects of radiation.

“We need to think about the health of the Japanese. Our country has already accelerated economically to the point where there is nowhere left to hurry. It is necessary to stabilize this state. 5G is fraught with great danger. And it is not only we who think so: yesterday this information was confirmed by the US ambassador to Japan. If America, China or the EU countries are ready to risk their people for the sake of progress, such a strategy is unacceptable for our welfare state,” said Nobunari Kabato, Minister of High Technologies.

The bill to ban fifth-generation mobile networks in the country has already been submitted to the Japanese parliament.

Source: Красная Армия (Web Archive)

Fake news is real — Artificial Intelligence is going to make it much worse | CNBC

“The Boy Who Cried Wolf” has long been a staple on nursery room shelves for a reason: It teaches kids that joking too much about a possible threat can leave people ignoring it when the threat becomes an actual danger.

President Donald Trump has been warning about “fake news” throughout his entire political career, casting a dark cloud over the journalism profession. And now the real wolf that industry experts should be alarmed about might be just around the corner.

The threat is called “deepfaking,” a product of AI and machine learning advancements that allows high-tech computers to produce completely false yet remarkably realistic videos depicting events that never happened or people saying things they never said. A viral video starring Jordan Peele and “Barack Obama” warned against this technology in 2018, but the message was not enough to keep Jim Carrey from starring in “The Shining” earlier this week.

The danger goes far beyond manipulating 1980s thrillers. Deepfake technology is allowing organizations that produce fake news to augment their “reporting” with seemingly legitimate videos, blurring the line between reality and fiction like never before — and placing the reputation of journalists and the media at greater risk.

Ben Zhao, a computer science professor at the University of Chicago, thinks the age of getting news on social media makes consumers very susceptible to this sort of manipulation.

“What the last couple years has shown is basically fake news is quite compelling even in [the] absence of actual proof. … So the bar is low,” Zhao said.

The bar to produce a convincing doctored video is lower than people might assume.

Earlier this year a clip purporting to show Democratic leader Nancy Pelosi slurring her words when speaking to the press was shared widely on social media, including at one point by Trump’s attorney Rudy Giuliani. However, closer inspection revealed that the video had been slowed to 75% of its normal speed to achieve this slurring effect, according to the Washington Post. Even with the real video now widely accessible, Hany Farid, a professor at UC Berkeley’s School of Information and a digital forensics expert, said he still regularly receives emails from people insisting the slowed video is the legitimate one.

“Even in these relatively simple cases, we are struggling to sort of set the record straight,” Farid said.

It would take a significant amount of expertise for a fake news outlet to produce a completely fabricated video of Oprah Winfrey endorsing Trump, but researchers say the technology is improving every day. At the University of Washington, computer vision researchers are developing this technology for positive, or at least benign, uses like making video conferencing more realistic and letting students talk to famous historical figures. But this research also leads to questions about potential dangers, as the attempts made by attackers are expected to continually improve.

How to detect a deepfake

To make one of these fake videos, computers digest thousands of still images of a subject to help researchers build a 3-D model of the person. This method has some limitations, according to Zhao, who noted the subjects in many deepfake videos today never blink, since almost all photographs are taken with a person’s eyes open.
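
As a rough illustration of the blink heuristic Zhao describes, the sketch below counts how often OpenCV's bundled eye detector loses the eyes across a clip; a real face blinks every few seconds, so a clip in which the eyes never "disappear" is mildly suspicious. The cascades, thresholds and decision rule here are illustrative assumptions, not a production detector, and as the following paragraphs note, newer fakes have largely closed this particular gap.

```python
# Crude blink-frequency check for the "deepfake subjects never blink" tell.
# Uses OpenCV's bundled Haar cascades; thresholds are illustrative guesses.
import cv2

def closed_eye_fraction(video_path: str) -> float:
    """Fraction of frames with a detected face but no detected eyes."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face_frames += 1
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) == 0:          # the detector "loses" closed eyes
            closed_frames += 1
    cap.release()
    return closed_frames / max(face_frames, 1)

# A blinking subject spends a few percent of frames with eyes closed; a fraction
# near zero over a long clip is one weak hint that the footage may be synthetic.
```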

However, Farid said these holes in the technology are being filled incredibly rapidly.

“If you asked me this question six months ago, I would’ve said, ‘Yeah, [the technology] is super cool, but there’s a lot of artifacts, and if you’re paying attention, you can probably tell that there’s something wrong,’” Farid said. “But I would say we are … quickly but surely getting to the point where the average person is going to have trouble distinguishing.”

In fact, Zhao said researchers believe the shortcomings that make deepfake videos look slightly off to the eye can readily be fixed with better technology and better hardware.

“The minute that someone says, ‘Here’s a research paper telling you about how to detect this kind of fake video,’ that is when the attackers look at the paper and say, ‘Thank you for pointing out my flaw. I will take that into account in my next-generation video, and I will go find enough input … so that the next generation of my video will not have the same problem,’” Zhao said.

“Once we live in an age where videos and images and audio can’t be trusted … well, then everything can be fake.” – Hany Farid, Professor at UC Berkeley’s School of Information

One of the more recent developments in this field is in generating speech for a video. To replicate a figure such as Trump’s voice, computers can now simply analyze hundreds of hours of him speaking. Then researchers can type out what they want Trump to say, and the computer will make it sound as if he actually said it. Facebook, Google and Microsoft have all more or less perfected this technology, according to Farid.

Manipulated videos of this sort aren’t exactly new — Forrest Gump didn’t actually meet JFK, after all. However, Farid says this technology is hitting its stride, and that makes the danger new.

“To me the threat is not so much ‘Oh, there’s this new phenomenon called deepfakes,’” Farid said. “It’s the injection of that technology into an existing environment of mistrust, misinformation, social media, a highly polarized electorate, and now I think there’s a real sort of amplification factor because when you hear people say things, it raises the level of belief to a whole new level.”

The prospect of widespread availability of this technology is raising eyebrows, too. Tech-savvy hobbyists have long been using deepfakes to manufacture pornography, a consistent and comically predictable trend for new technology. But Zhao believes it is only a matter of time before the research-caliber technology gets packaged and released for mass-video manipulation in much broader contexts.

“At some point someone will basically take all these technologies and integrate and do the legwork to build a sort of fairly sophisticated single model, one-stop shop … and when that thing hits and becomes easily accessible to many, then I think you’ll see this becoming much more prevalent,” Zhao said. “And there’s nothing really stopping that right now.”

Facing a massive consumer trust issue

When this happens, the journalism industry is going to face a massive consumer trust issue, according to Zhao. He fears it will be hard for top-tier media outlets to distinguish a real video from a doctored one, let alone news consumers who haphazardly stumble across the video on Twitter.

“Once we live in an age where videos and images and audio can’t be trusted … well, then everything can be fake,” Farid said. “We can have different opinions, but we can’t have different facts. And I think that’s sort of the world we’re entering into when we can’t believe anything that we see.”

Zhao has spent a great deal of time speaking with prosecutors, judges — the legal profession is another sector where the implications are huge — reporters and other professors to get a sense for every nuance of the issue. However, despite his clear understanding of the danger deepfakes pose, he is still unsure of how news outlets will go about reacting to the threat.

“Certainly, I think what can happen is … there will be even less trust in sort of mainstream media, the main news outlets, legitimate journalists [that] sort of react and report real-time stories because there is a sense that anything that they have seen … could be in fact made up,” Zhao said.

Then it becomes a question of how the press deals with disputes over reality.

“And if it’s someone’s word, an actual eyewitness’ word versus a video, which do you believe, and how do you as an organization go about verifying the authenticity or the illegitimacy of a particular audio or video?” Zhao asked.

Defeating the deepfakes

Part of this solution may be found in the ledger technology that provides the digital infrastructure to support cryptocurrencies like bitcoin — the blockchain. Many industries are touting blockchain as a sort of technological Tylenol. Though few understand exactly how it works, many swear it will solve their problems.

Farid said companies like the photo and video verification platform Truepic, for which he serves as an advisor, are using the blockchain to create and store digital signatures for authentically shot videos as they are being recorded, which makes them much easier to verify later. Both Zhao and Farid hope social platforms like Facebook and Twitter will then promote these verified videos over non-verified ones, helping to halt the spread of deepfakes.
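
Truepic's exact pipeline isn't detailed here, but the idea Farid describes, committing a fingerprint of footage to an append-only ledger at capture time so that any later edit is detectable, can be sketched with standard-library hashing alone. The `Ledger` class below is a stand-in for a real blockchain and omits cryptographic signing entirely; it only shows why a hash recorded at capture time makes tampering evident.

```python
# Minimal sketch of anchoring a video's fingerprint in a tamper-evident hash chain.
# Stand-in for a real blockchain/signature service; illustrative only.
import hashlib, json, time

def fingerprint(path: str) -> str:
    """SHA-256 of the raw video file, computed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

class Ledger:
    """Append-only chain: each entry commits to the previous one."""
    def __init__(self):
        self.entries = []

    def record(self, video_hash: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"video_hash": video_hash, "prev": prev, "ts": time.time()}
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self, path: str) -> bool:
        """True if the file still matches a hash recorded at capture time."""
        return any(e["video_hash"] == fingerprint(path) for e in self.entries)

# ledger = Ledger(); ledger.record(fingerprint("clip.mp4"))
# Any later re-encoding or frame edit changes the SHA-256 and verify() fails.
```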

“The person creating the fake always has the upper hand,” Farid said. “Playing defense is really, really hard. So I think in the end our goal is not to eliminate these things, but it’s to manage the threat.”

Until this happens, Zhao said the fight against genuinely fake news may not start on a ledger, but in stronger consumer awareness and journalists banding together to better verify sources through third parties.

“One of the hopes that I have for defeating this type of content is that people are just so inundated with news coverage and information about these types of videos that they become fundamentally much more skeptical about what a video means and they will look closer,” Zhao said. “There has to be that level of scrutiny by the consumer for us to have any chance of fighting back against this type of fake content.”

Nicholas Diakopoulos, assistant professor in Northwestern University’s School of Communication and expert on the future of journalism, said via email that the best solutions involve a mix of educational and sociotechnical advances.

“There are a variety of perceptual cues that can be tip-offs to a deepfake and we should be teaching those broadly to the public,” he said.

Diakopoulos has referenced Farid’s work on photo forensics among ideas outlined in an article he wrote for the Columbia Journalism Review last year. He also cited a research project called FaceForensics that uses machine learning to detect, with 98.1% accuracy, whether a video of a face is real. Another research technique under study: Blood flow in video of a person’s face can be analyzed in order to see if pixels periodically get redder when the heart pumps blood.
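
The blood-flow idea (sometimes called remote photoplethysmography) can be sketched roughly: average the red channel over a patch of facial skin frame by frame and look for a periodic component in the normal heart-rate band. The patch location, frequency band and threshold below are illustrative assumptions; a real forensic tool would have to contend with lighting, motion and compression.

```python
# Rough sketch of the "does the face pulse?" check (remote photoplethysmography).
# Patch location, heart-rate band and threshold are illustrative assumptions.
import cv2
import numpy as np

def has_pulse(video_path: str, fps: float = 30.0) -> bool:
    cap = cv2.VideoCapture(video_path)
    redness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w, _ = frame.shape
        patch = frame[h // 4: h // 2, w // 3: 2 * w // 3]  # crude central-face patch
        redness.append(patch[:, :, 2].mean())              # OpenCV frames are BGR
    cap.release()

    if len(redness) < int(5 * fps):                        # need a few seconds of video
        return False
    signal = np.asarray(redness) - np.mean(redness)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.0)                   # roughly 42-180 beats/min
    # Real skin shows a clear spectral peak in the heart-rate band; a flat band
    # is one hint (among many) that the face may have been synthesized.
    return spectrum[band].max() > 3 * np.median(spectrum[1:])
```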

“On the sociotechnical side, we need to develop advanced forensics techniques that can help debunk synthesized media when put into the hands of trained professionals,” he told CNBC. “Rapid response teams of journalists should be trained and ready to use these tools during the 2020 elections so they can debunk disinformation as quickly as possible.”

Diakopoulos has studied the implications of deepfakes for the 2020 elections specifically. He also has written papers on how journalists need to think when “reporting in a machine reality.”

And he remains optimistic.

“If news organizations develop clear and transparent policies of their efforts using such tools to ensure the veracity of the content they publish, this should help buttress their trustworthiness. In an era when we can’t believe our eyes when we see something online, news organizations that are properly prepared, equipped and staffed are poised to become even more trusted sources of information.”

Source: CNBC

Assange psychologically tortured to ‘breaking point’ by ‘democratic states,’ UN rapporteur tells RT | RT.com

Jailed WikiLeaks co-founder Julian Assange shows clear signs of degrading and inhumane treatment which only adds to his deteriorating health, UN Special Rapporteur on Torture Nils Melzer told RT.

Assange has “all the symptoms typical for a person who has been exposed to prolonged psychological torture,” Melzer told RT’s Afshin Rattansi. This adds to the toll of his deteriorating physical state caused by a lack of adequate medical care for several years, he said.

Melzer said he was judging from two decades of experience in working with POWs and political prisoners, and only after applying “scientific” UN methods to assess Assange’s condition. But the journalist’s case still “shocked” him.

An individual has been isolated and singled out by several democratic states, and persecuted systematically… to the point of breaking him.

Earlier this month, a UK court sentenced the WikiLeaks co-founder to nearly a year in jail for skipping bail in 2012. The courts are now deciding whether to extradite Assange to the US, where he faces 17 charges under the Espionage Act. He could serve up to 175 years in prison if found guilty.

Also in May, Sweden reopened an investigation into rape allegations against Assange, which he denies. The probe was originally dropped in 2017.

WikiLeaks warned that the journalist’s health had “significantly deteriorated” during the seven years he spent living in the Ecuadorian Embassy in London, and continued to worsen after he was evicted in April and placed in a British prison. According to WikiLeaks, he was recently moved to the prison’s “hospital wing.”

 

‘No future for dissidents’ on social media: Paul Joseph Watson reflects on Facebook ban

Popular internet pundit Paul Joseph Watson is mulling legal action after being banned from Facebook for spreading “hate,” telling RT that it’s clear social media platforms are cracking down on dissident political speech.

Facebook kicked Watson off its platform on May 2 – along with conservative commentator Laura Loomer, Infowars founder Alex Jones, and black nationalist leader Louis Farrakhan. The group was accused of spreading “hateful” content, although no warnings or concrete reasons were provided for their seemingly arbitrary bans.

Watson, who runs a YouTube channel that boasts more than 1.5 million subscribers, has become a well-known but polarizing commentator on culture and politics. A long-time Infowars contributor, Watson now has his own outlet, Summit News.

Although he’s been labeled as an “alt-right” conspiracy theorist, Watson insists that he’s been smeared – and de-platformed – simply because he holds contrarian views.

“There is clearly no future for dissident personalities on any major social media network. We will have to go back to mailing lists and websites as our primary, and perhaps only platforms for delivering content,” he said.

He told RT that he’s hired the “best media lawyers in London” who “have taken on numerous media giants in the past and won” and will advise him about what legal recourse he has against Facebook.

The first step towards suing Facebook over his ban, according to Watson, is to initiate a written request, called a Subject Access Request, which requires the company to release all information relevant to the individual’s case under Section 7 of the Data Protection Act.

The information would be needed to verify whether an individual violated community standards or whether the company merely made a politically motivated decision. Watson also intends to put Facebook on notice about the harm it has caused to his reputation by placing him in the category of “dangerous individuals,” which is one of Facebook’s stated grounds for banning people under its community standards.

Facebook’s criteria for what counts as a “dangerous individual” include involvement in terrorist activity, organized hate, mass or serial murder, human trafficking, and organized violence or criminal activity. Facebook would have to verifiably prove that a banned person engaged in one of these activities, or the decision could count as defamation of character.

The hate watchdog organization the Southern Poverty Law Center (SPLC) has publicly admitted it was behind the censorship. In its statement, the SPLC writes that the banning of these individuals shows that social media companies are responding to its “pressure,” but adds that they nonetheless haven’t done enough, claiming they “have more work to do against hateful content.”

“The SPLC is not a fact-checker, it’s a hyper-partisan political attack dog which solely exists to demonize its ideological adversaries. There is no way to hold them accountable, they are accountable only to their own agenda and bias,” Watson said.

Twitter has reportedly dropped the SPLC as a reliable source for detecting hate content online and policing its platform, but other social media giants like Facebook, Google, and Amazon have continued to use the watchdog to decide what content should be kept on their sites.

Facebook has also recently come under fire from its co-founder Chris Hughes, who wrote an exclusive op-ed in The New York Times on Thursday calling for Facebook’s monopoly to be broken up, as its CEO Mark Zuckerberg has “unilateral control over free speech,” adding that his power is “unprecedented and un-American.”

“Personally, if and when I am banned on everything, I will probably just move into the background until the environment is once again fertile and if big enough alternative platforms exist which actually support free speech,” Watson added.

Facebook isn’t the only social media platform to face accusations of shutting down political speech it doesn’t like: In April, the company banned two conservative British candidates running for European Parliament, Tommy Robinson and Carl Benjamin, less than a month before the election.

The site banned Alex Jones and all Infowars accounts in September 2018.

Source: RT.com

Inside the bizarre world of internet trolls and propagandists | TEDTalks

Journalist Andrew Marantz spent three years embedded in the world of internet trolls and social media propagandists, seeking out the people who are propelling fringe talking points into the heart of conversation online and trying to understand how they’re making their ideas spread. Go down the rabbit hole of online propaganda and misinformation — and learn how we can start to make the internet less toxic.

Source: TEDTalks

The Weaponization of Social Media | Counterpunch

By Faisal Khan

The use of ‘bots’ presents modern society with a significant dilemma: the technologies and social media platforms (such as Twitter and Facebook) that once promised to enhance democracy are now increasingly being used to undermine it. Writers Peter W Singer and Emerson Brooking believe ‘the rise of social media and the Internet has become a modern-day battlefield where information itself is weaponised’. To them ‘the online world is now just as indispensable to governments, militaries, activists, and spies as it is to advertisers and shoppers’. They argue this is a new form of warfare, which they call ‘LikeWar’. The terrain of LikeWar is social media; its platforms ‘are not designed to reward morality or veracity but virality.’ The ‘system rewards clicks, interactions, engagement and immersion time…figure out how to make something go viral, and you can overwhelm even the truth itself.’

In its most simple form the word ‘bot’ is short for ‘robot’; beyond that, there is significant complexity. There are different types of bots. For example, there are ‘chatbots’ such as Siri and Amazon’s Alexa; they recognise human voice and speech and help us with our daily tasks and requests for information. There are search engine style ‘web bots’ and ‘spambots’. There are also ‘sockpuppets’ or ‘trolls’; these are often fake identities used to interact with ordinary users on social networks. There are ‘social bots’; these can assume a fabricated identity and can spread malicious links or advertisements. There are also ‘hybrid bots’ that combine automation with human input and are often referred to as ‘cyborgs’. Some bots are harmless; some more malicious, some can be both.

The country that is perhaps most advanced in this new form of warfare and political influence is Russia. According to Peter Singer and Emerson Brooking ‘Russian bots more than simply meddled in the 2016 U.S. presidential election…they used a mix of old-school information operations and new digital marketing techniques to spark real-world protests, steer multiple U.S. news cycles, and influence voters in one of the closest elections in modern history. Using solely online means, they infiltrated U.S. political communities so completely that flesh-and-blood American voters soon began to repeat scripts written in St. Petersburg and still think them their own’. Internationally, these ‘Russian information offensives have stirred anti-NATO sentiments in Germany by inventing atrocities out of thin air; laid the pretext for potential invasions of Estonia, Latvia, and Lithuania by fuelling the political antipathy of ethnic Russian minorities; and done the same for the very real invasion of Ukraine. And these are just the operations we know about.’

We witnessed similar influence operations here during the Brexit referendum in 2016. A study by the Financial Times reported that during the referendum campaign ‘the 20 most prolific accounts … displayed indications of high levels of automation’. The anti-Muslim hate monitoring group Tell MAMA recorded in its latest annual report that manual bots based in St Petersburg were active in spreading anti-Muslim hate online. Israel has also used manual ‘bots’ to promote a more positive image of itself online.
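
The FT's finding rests on how implausibly prolific some accounts were. A toy version of that screen, with made-up data and an arbitrary posts-per-day cut-off, might look like this:

```python
# Toy screen for automation: flag accounts whose posting rate is implausible
# for a human. Data and the 50-posts/day cut-off are illustrative assumptions.
from collections import Counter
from datetime import datetime

def likely_automated(posts, threshold_per_day=50):
    """posts: iterable of (account, iso_timestamp). Returns flagged accounts."""
    days_active = {}
    counts = Counter()
    for account, ts in posts:
        day = datetime.fromisoformat(ts).date()
        days_active.setdefault(account, set()).add(day)
        counts[account] += 1
    return {a for a, n in counts.items()
            if n / len(days_active[a]) >= threshold_per_day}

sample = [("@busy_bot", f"2016-06-{d:02d}T{h:02d}:00:00")
          for d in range(1, 8) for h in range(24)] * 3          # 72 posts/day
sample += [("@human", "2016-06-01T09:15:00"), ("@human", "2016-06-02T18:40:00")]
print(likely_automated(sample))   # {'@busy_bot'}
```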

The Oxford Internet Institute (OII) has studied online political discussions relating to several countries on social media platforms such as Twitter and Facebook. It claims that in all the elections, political crises and national security-related discussions it examined, there was not one instance where social media opinion had not been manipulated by what it calls ‘computational propaganda’. For the OII, while it remains difficult to quantify the impact bots have, ‘computational propaganda’ is now one of the most ‘powerful tools against democracy’.

Donald Trump perhaps more than any other US President to date understands the power of social media. The OII found, for example, that although he alienated Latino voters on the campaign trail, he had some fake Latino Twitter bots tweeting support for him. Emerson T Brooking informed me that social media bots can be highly effective; for him, ‘If a bot-driven conversation successfully enters the “Trending” charts of a service like Twitter, it can break into mainstream discussion and receive a great deal of attention from real flesh-and-blood users’. He continues: ‘The first unequivocal use of political bots was in the 2010 Special Senate Election in Massachusetts, which ended in the election of Senator Scott Brown. The bots helped draw journalist (and donor) interest from across the country. The Islamic State was also a very effective user of botnets to spread its propaganda over Arabic-speaking Twitter. In 2014, it repeatedly drove hashtags related to its latest execution or battlefield victory (e.g. #AllEyesOnISIS) to international attention.’

So, what can be done to better regulate bots? The OII has called for social media platforms to act against bots and has suggested some steps. These include: making the posts they select for news feeds more ‘random’, so users don’t only see like-minded opinions; giving news feeds a trustworthiness score; and auditing the algorithms used to decide which posts to promote. However, the OII also cautions against over-regulating the platforms and suppressing political conversation altogether. Marc Owen Jones of Exeter University, who has researched bots, feels that in the case of Twitter better ‘verification procedures could tackle the bots’. According to Emerson Brooking, ‘a simple non-invasive proposal bouncing around Congress now would mandate the labelling of bot accounts. This would allow bots’ positive automation functions to continue while keeping them from fooling everyday media users.’

Source: Counterpunch

How 5G will change (destroy) the world | World Economic Forum


Editor’s Note: The unforeseen consequences of unleashing a worldwide electronic network with nowhere to hide, bombarding every living system with frequencies powerful enough to disrupt it, with proven oxygen-shattering and immune-suppressing technology, are beginning to unfold. This article is an industry puff piece for the global leaders of industry promoting 5G as the next panacea for all our problems. My friends, this is a crisis of consciousness and will forever map the trajectory of human evolution. Only robots will survive this 5G rollout. Read it and weep!

By Don Rosenberg

It is not an easy time to be an internationalist, to seek global solutions to global problems amid what feels like one of history’s periodic inclinations toward divisiveness.

Yet, ironically, we’re on the verge of a new age of interconnectedness, when the daily lives of people across the planet will be more closely intertwined than ever. Advances in technology will usher in the age of fifth generation, or 5G, telecommunications. And, if past is prologue, this technological evolution will lead to dramatic societal changes.

The first generation of mobile communications, with brick-sized phones, brought just a handful of users expensive and often unreliable analogue voice calling. The second generation introduced digital voice service that was less likely to be dropped, available to many more people and ultimately cheaper to use. 3G ushered in the mobile internet, mobile computing, and the proliferation of apps. 4G (often called LTE) made possible all we have come to expect of mobile broadband: streaming video and audio; instantaneous ride hailing; the explosion of social media.

We take all this connectivity for granted, but the engineering inside the device in your bag or pocket today would have seemed impossible less than 20 years ago.

So, where will 5G take us?

Think about a world in which not just people but all things are connected: cars to the roads they are on; doctors to the personal medical devices of their patients; augmented reality available to help people shop and learn and explore wherever they are. This requires a massive increase in the level of connectivity.

5G is the technological answer, making possible billions of new connections, and making those connections secure and instantaneous. 5G will impact every industry – autos, healthcare, manufacturing and distribution, emergency services, just to name a few. And 5G is purposely designed so that these industries can take advantage of cellular connectivity in ways that wouldn’t have been possible before, and to scale upwards as use of 5G expands.

But generational change in mobile communications doesn’t just appear overnight. It requires significant effort in research and development and the resources necessary to support that effort. Work on 4G took nearly a decade and the challenges were not easy. Consider one of tens of thousands of problems that needed to be solved as described by an engineer at Qualcomm, where much of this technology was invented:

“When the signal leaves the base station, it can undergo a loss of up to 130 decibels before it reaches your mobile phone. To put that loss into perspective, if you consider the transmitted signal power to be roughly the size of the Earth, then the received signal power would be equivalent to the size of a tiny bacteria.”

That is a tremendous loss of power, and it requires some pretty impressive engineering to compensate for the effect of the loss on the words, pictures, and other data we send and receive across the airwaves in a transparent, seamless and instantaneous way.
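
For readers wondering where the Earth-to-bacterium comparison comes from: decibels are logarithmic, so a 130 dB loss is a power ratio of 10^13. The short calculation below, using rough order-of-magnitude lengths, shows the two ratios line up.

```python
# The arithmetic behind the quote: a 130 dB path loss is a factor-of-10**13 drop.
loss_db = 130
power_ratio = 10 ** (loss_db / 10)               # dB -> linear power ratio
print(f"{power_ratio:.0e}")                      # 1e+13

# Rough sanity check of the analogy (order-of-magnitude lengths only):
earth_diameter_m = 1.3e7                         # ~12,700 km
bacterium_m = 1e-6                               # ~1 micrometre
print(f"{earth_diameter_m / bacterium_m:.0e}")   # 1e+13 -- same scale
```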

But we weren’t alone. The international engineering co-operation that goes into development of a telecom standard illustrates how much can be achieved when disparate national, commercial and scientific parties work together for the common good.

Like 3G and 4G, 5G is the responsibility of the standards-setting organisation 3GPP, where the handful of companies that invent technologies come together with many, many more companies who will develop products that implement those technologies.

Think about this process for a moment: engineers from rival inventing companies, rival product makers, rival wireless network operators, all from different countries and continents, discussing, testing, striving to perfect tens of thousands of different technical solutions that ultimately make up a standard like 5G.

They judge each technical solution using a merit-based, consensus-building approach. This process has been at the foundation of a technological revolution that spawned myriad new industries, millions of new jobs and well over $1 trillion in economic growth.

It’s the fusion of commercial self-interest with the recognition that some problems are best solved by working together. And it’s not a bad model of human behaviour if we are to meet the World Economic Forum’s goal this year to address the problems of “a fractured world”.

The benefits and advantages of 5G technology are expected to be available sometime in 2019. We believe 5G will change the world even more profoundly than 3G and 4G; that it will be as revolutionary as electricity or the automobile, benefitting entire economies and entire societies.

Developing nations have rivalled or surpassed their industrialised counterparts in benefiting from the deployment of mobile technology, and there’s every reason to think 5G will have an even bigger levelling effect than its predecessors.

Economists estimate the global economic impact of 5G in new goods and services will reach $12 trillion by 2035 as 5G moves mobile technology from connecting people to people and information, towards connecting people to everything.

 

Many of the benefits probably aren’t yet apparent to us. Wireless network operators initially resisted proposals to give their customers mobile access to the internet, questioning why they would want it. At the dawn of 4G’s adoption no one could have predicted the new business models that grew on the back of mobile broadband, like Uber, Spotify and Facebook.

Now, according to the European Patent Office, the number of patent applications related to “smart connected objects” has surged 54% over the last three years, suggesting new, related and as-yet unknown inventions will arrive even before 5G becomes available.

This is news that should encourage us amid glum commentaries on the state of the world. There is promise yet in what we’re capable of achieving.

Source: World Economic Forum

Top 9 ethical issues in artificial intelligence | World Economic Forum

By Julia Bossmann

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?


1. Unemployment. What happens after the end of jobs?

The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

2. Inequality. How do we distribute the wealth created by machines?

Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley … only in Silicon Valley there were 10 times fewer employees.

If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?

3. Humanity. How do machines affect our behaviour and interaction?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a chatbot named Eugene Goostman was reported to have passed a Turing test for the first time. In this challenge, human judges used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled roughly a third of the judges into thinking they had been talking to a human being, enough to clear the contest’s threshold.

This milestone is only the start of an age where we will frequently interact with machines as if they are humans, whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

Even though not many of us are aware of this, we are already witnesses to how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization of content to capture our attention. This and other methods are used to make numerous video and mobile games addictive. Tech addiction is the new frontier of human dependency.
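
The A/B testing mentioned above is mechanically simple: serve two headline variants at random, count clicks, and keep whichever converts better. A bare-bones simulation, with made-up headlines and click rates, looks like this:

```python
# Bare-bones headline A/B test: serve variants at random, keep the higher CTR.
# Headlines and click probabilities are made up for illustration.
import random

random.seed(0)
variants = {
    "A": "Study examines effects of screen time",
    "B": "You won't BELIEVE what screens do to your brain",
}
true_ctr = {"A": 0.03, "B": 0.08}            # hidden "audience" behaviour
shown = {"A": 0, "B": 0}
clicked = {"A": 0, "B": 0}

for _ in range(20_000):                      # simulated impressions
    v = random.choice(list(variants))
    shown[v] += 1
    if random.random() < true_ctr[v]:
        clicked[v] += 1

ctr = {v: clicked[v] / shown[v] for v in variants}
winner = max(ctr, key=ctr.get)
print(ctr, "->", variants[winner])           # the click-bait variant wins
```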

On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior. However, in the wrong hands it could prove detrimental.

4. Artificial stupidity. How can we guard against mistakes?

Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.

Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn’t be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.
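
The training-then-testing cycle described above is standard machine-learning practice. The short sketch below shows it on a toy digits dataset with scikit-learn, then feeds the trained model random noise to make the article's point: inputs unlike anything seen in training can still be classified with unwarranted confidence. The dataset and model choices are illustrative only.

```python
# Training phase vs. test phase on a toy dataset (scikit-learn).
# Illustrates the split described in the article, not a hardened system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                              # "learning" phase
print("test accuracy:", model.score(X_test, y_test))     # roughly 0.96 on held-out digits

# Inputs unlike anything seen in training (e.g. random dot patterns) still get
# confidently assigned digit labels -- the blind spot the article warns about.
noise = np.random.default_rng(0).random((5, X.shape[1])) * 16
print("labels for random noise:", model.predict(noise))
```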

5. Racist robots. How do we eliminate AI bias?

Though artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

6. Security. How do we keep AI safe from adversaries?

The more powerful a technology becomes, the more it can be used for nefarious purposes as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

[Embedded media: Proliferation of Armed Drones]

7. Evil genies. How do we protect against unintended consequences?

It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.

8. Singularity. How do we stay in control of a complex intelligent system?

The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

9. Robot rights. How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
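
To make the genetic-algorithm mechanism concrete, here is a deliberately tiny, hypothetical run: a population of candidate solutions is scored, the weakest are deleted each generation, and the survivors are copied with small mutations. Nothing in it can suffer, but the select-and-delete loop is exactly the process the article is asking about.

```python
# Tiny genetic algorithm: score a population, delete the unsuccessful instances,
# and breed the survivors with small mutations. Purely illustrative.
import random

random.seed(1)
TARGET = 42.0

def fitness(x: float) -> float:
    return -abs(x - TARGET)                 # closer to the target = fitter

population = [random.uniform(-100, 100) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]              # the rest are "deleted"
    population = [s + random.gauss(0, 1.0) for s in survivors for _ in range(4)]

best = max(population, key=fitness)
print(f"best candidate after 50 generations: {best:.2f}")   # typically ends up near 42
```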

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.

Source: World Economic Forum

NSA reportedly planted spyware on electronics equipment | CNET

By Dan Farber

A new report from Der Spiegel, based on internal National Security Agency documents, reveals more details about how the spy agency gains access to computers and other electronic devices to plant backdoors and other spyware.

The Office of Tailored Access Operations, or TAO, is described as a “squad of digital plumbers” that deals with hard targets — systems that are not easy to infiltrate. TAO has reportedly been responsible for accessing the protected networks of heads of state worldwide, works with the CIA and FBI to undertake “sensitive missions,” and has penetrated the security of undersea fiber-optic cables. TAO also intercepts deliveries of electronic equipment to plant spyware to gain remote access to the systems once they are delivered and installed.

Der Spiegel: Inside TAO – Documents Reveal Top NSA Hacking Unit

Der Spiegel: Shopping for Spy Gear – Catalog Advertises NSA Toolbox

According to the report, the NSA has planted backdoors to access computers, hard drives, routers, and other devices from companies such as Cisco, Dell, Western Digital, Seagate, Maxtor, Samsung, and Huawei. The report describes a 50-page product catalog of tools and techniques that an NSA division called ANT, which stands for Advanced or Access Network Technology, uses to gain access to devices.

This follows a report that the security firm RSA intentionally allowed the NSA to create a backdoor into its encryption tokens.

“For nearly every lock, ANT seems to have a key in its toolbox. And no matter what walls companies erect, the NSA’s specialists seem already to have gotten past them,” the report said. The ANT department prefers targeting the BIOS, code on a chip on the motherboard that runs when the machine starts up. The spyware infiltration is largely invisible to other security programs and can persist if a machine is wiped and a new operating system is installed.

With the exception of Dell, the companies cited in the report and contacted by Der Spiegel claimed they had no knowledge of any NSA backdoors into their equipment.

In a blog post Sunday, a Cisco spokesperson wrote:

At this time, we do not know of any new product vulnerabilities, and will continue to pursue all avenues to determine if we need to address any new issues. If we learn of a security weakness in any of our products, we will immediately address it. As we have stated prior, and communicated to Der Spiegel, we do not work with any government to weaken our products for exploitation, nor to implement any so-called security ‘back doors’ in our products.

The NSA declined to comment on the report but said the TAO was key for national defense.

“Tailored Access Operations (TAO) is a unique national asset that is on the front lines of enabling NSA to defend the nation and its allies,” the agency said in a statement. “We won’t discuss specific allegations regarding TAO’s mission, but its work is centered on computer network exploitation in support of foreign intelligence collection.”

The end does not appear to be in sight for the revelations from the documents obtained by Edward Snowden, according to Glenn Greenwald, the journalist who first collaborated with Snowden to publish the material. In a speech delivered by video to the Chaos Communication Congress (CCC) in Hamburg on Friday, he said, “There are a lot more stories to come, a lot more documents that will be covered. It’s important that we understand what it is we’re publishing, so what we say about them is accurate.”

Source: CNET