The Truth Behind Coronavirus Pandemic: COVID-19 Lockdown & The Economic Crash | London Real

 

https://youtu.be/gMTZu6_TjU8

Johnny Liberty, Editor’s Notes: Why Destroy the Global Economic System During the Coronavirus Crisis (and replace it with another that better serves the Global Power Structure)?

  • Global Bankers will pull trillions of dollars of equity out of the economic system (stocks, real estate values, business values) to roll back the debt obligations of the international bankers (because of the nature of the debt-based monetary system)
  • Total Economic Shutdown will destroy millions of family businesses, small businesses and medium-sized businesses, leaving a larger share to the big corporate players.
  • Total Economic Shutdown will drop millions of people in countries around the world to the very bottom of the economic ladder. This is economic suicide for We the People and a few thousand dollars of government subsidies will not restore even a fraction of the losses.
  • Don’t believe what you’re being told because somebody in a suit said so. Question authority and do your own independent research.
  • The Coronavirus crisis is not about public health or caring for the elderly.
  • The Global Power Structure doesn’t care one iota about the health and welfare of the elderly. Protecting the elderly is a convenient excuse to destroy the existing economic order and impose a New World Order run by an elite few at the top of the pyramid.
  • Who benefits? The Global Power Structure, or the 1%, will be the ones who benefit.
  • What is the result? To impose a Global Technocracy run by experts, scientists, technocrats, politicians and artificial intelligence under their total control. Your every move will be watched and tracked. You will be plugged into “The Matrix”.
  • For this you can thank the Coronavirus panic and overreaction.

Source: London Real & YouTube

The Weaponization of Social Media | Counterpunch

By Faisal Khan

The use of ‘bots’ presents modern society with a significant dilemma: the technologies and social media platforms (such as Twitter and Facebook) that once promised to enhance democracy are now increasingly being used to undermine it. Writers Peter W Singer and Emerson Brooking believe ‘the rise of social media and the Internet has become a modern-day battlefield where information itself is weaponised’. To them, ‘the online world is now just as indispensable to governments, militaries, activists, and spies as it is to advertisers and shoppers’. They argue this is a new form of warfare, which they call ‘LikeWar’. The terrain of LikeWar is social media; its platforms ‘are not designed to reward morality or veracity but virality.’ The ‘system rewards clicks, interactions, engagement and immersion time…figure out how to make something go viral, and you can overwhelm even the truth itself.’

In its simplest form, the word ‘bot’ is short for ‘robot’; beyond that, there is significant complexity, and there are many different types. There are ‘chatbots’ such as Siri and Amazon’s Alexa; they recognise human voice and speech and help us with our daily tasks and requests for information. There are search-engine-style ‘web bots’ and ‘spambots’. There are ‘sockpuppets’ or ‘trolls’: fake identities used to interact with ordinary users on social networks. There are ‘social bots’, which can assume a fabricated identity and spread malicious links or advertisements. And there are ‘hybrid bots’, often referred to as ‘cyborgs’, that combine automation with human input. Some bots are harmless, some are malicious, and some can be both.

The country that is perhaps most advanced in this new form of warfare and political influence is Russia. According to Peter Singer and Emerson Brooking ‘Russian bots more than simply meddled in the 2016 U.S. presidential election…they used a mix of old-school information operations and new digital marketing techniques to spark real-world protests, steer multiple U.S. news cycles, and influence voters in one of the closest elections in modern history. Using solely online means, they infiltrated U.S. political communities so completely that flesh-and-blood American voters soon began to repeat scripts written in St. Petersburg and still think them their own’. Internationally, these ‘Russian information offensives have stirred anti-NATO sentiments in Germany by inventing atrocities out of thin air; laid the pretext for potential invasions of Estonia, Latvia, and Lithuania by fuelling the political antipathy of ethnic Russian minorities; and done the same for the very real invasion of Ukraine. And these are just the operations we know about.’

We witnessed similar influence operations in the UK during the Brexit referendum in 2016. A study by the Financial Times reported that during the referendum campaign ‘the 20 most prolific accounts … displayed indications of high levels of automation’. Tell MAMA, a group that monitors anti-Muslim hate, recorded in its latest annual report that manual bots based in St Petersburg were active in spreading anti-Muslim hate online. Israel has also used manual ‘bots’ to promote a more positive image of itself online.

The Oxford Internet Institute (OII) has studied online political discussions relating to several countries on social media platforms such as Twitter and Facebook. It claims that in all the elections, political crises and national security-related discussions it examined, there was not one instance where social media opinion had not been manipulated by what it calls ‘computational propaganda’. For the OII, while it remains difficult to quantify the impact bots have, ‘computational propaganda’ is now one of the most ‘powerful tools against democracy’.

Donald Trump perhaps more than any other US President to date understands the power of social media. The OII found, for example, that although he alienated Latino voters on the campaign trail, he had fake Latino Twitter bots tweeting support for him. Emerson T Brooking informed me that social media bots can be highly effective; for him, ‘If a bot-driven conversation successfully enters the “Trending” charts of a service like Twitter, it can break into mainstream discussion and receive a great deal of attention from real flesh-and-blood users’. He continues: ‘The first unequivocal use of political bots was in the 2010 Special Senate Election in Massachusetts, which ended in the election of Senator Scott Brown. The bots helped draw journalist (and donor) interest from across the country. The Islamic State was also a very effective user of botnets to spread its propaganda over Arabic-speaking Twitter. In 2014, it repeatedly drove hashtags related to its latest execution or battlefield victory (e.g. #AllEyesOnISIS) to international attention.’

So, what can be done to better regulate bots? The OII has called for social media platforms to act against bots and has suggested some steps: making the posts selected for news feeds more ‘random’, so users don’t only see like-minded opinions; giving news feeds a trustworthiness score; and auditing the algorithms used to decide which posts to promote. However, the OII also cautions against over-regulating the platforms to the point of suppressing political conversation altogether. Marc Owen Jones of Exeter University, who has researched bots, feels that in the case of Twitter better ‘verification procedures could tackle the bots’. According to Emerson Brooking, ‘a simple non-invasive proposal bouncing around Congress now would mandate the labelling of bot accounts. This would allow bots’ positive automation functions to continue while keeping them from fooling everyday media users.’

Source: Counterpunch

Top 9 ethical issues in artificial intelligence | World Economic Forum

By Julia Bossmann

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?


1. Unemployment. What happens after the end of jobs?

The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

2. Inequality. How do we distribute the wealth created by machines?

Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut its reliance on the human workforce, which means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley, but in Silicon Valley there were 10 times fewer employees.

If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?

3. Humanity. How do machines affect our behaviour and interaction?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a chatbot named Eugene Goostman was widely reported to have passed a Turing test for the first time. In this challenge, human judges used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman convinced about a third of the judges that they had been talking to a human being, crossing the 30% threshold set for the event.

This milestone is only the start of an age where we will frequently interact with machines as if they are humans, whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

Even though not many of us are aware of this, we are already witnessing how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. Headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and similar methods are used to make numerous video and mobile games addictive. Tech addiction is the new frontier of human dependency.
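As a concrete illustration, here is a minimal sketch (in Python, with invented click rates) of the A/B test loop behind such headline optimization: show each variant to a slice of readers, count clicks, and keep the variant with the higher click-through rate.

```python
import random

def simulate_ab_test(rate_a, rate_b, n_per_variant, seed=0):
    """Simulate showing two headline variants and comparing click-through rates.

    rate_a / rate_b are the (hypothetical) true click probabilities of each
    headline; in a real system these are unknown and estimated from clicks.
    """
    rng = random.Random(seed)
    clicks_a = sum(rng.random() < rate_a for _ in range(n_per_variant))
    clicks_b = sum(rng.random() < rate_b for _ in range(n_per_variant))
    ctr_a = clicks_a / n_per_variant
    ctr_b = clicks_b / n_per_variant
    winner = "A" if ctr_a >= ctr_b else "B"
    return ctr_a, ctr_b, winner

# A more curiosity-baiting variant (B) with a higher true click rate should
# win once enough impressions are collected.
ctr_a, ctr_b, winner = simulate_ab_test(rate_a=0.05, rate_b=0.08, n_per_variant=10_000)
print(f"CTR A: {ctr_a:.3f}  CTR B: {ctr_b:.3f}  winner: {winner}")
```

Real platforms run this loop continuously over many variants at once; the principle is the same.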

On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behavior. However, in the wrong hands it could prove detrimental.

4. Artificial stupidity. How can we guard against mistakes?

Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into the test phase, where it is hit with more examples and we see how it performs.
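The train/test split described above can be sketched in a few lines. This toy example (all data invented) trains a hand-rolled nearest-centroid classifier on labelled points, then measures how it performs on held-out examples it never saw during training:

```python
def train(points, labels):
    """Training phase: learn one centroid (mean point) per class."""
    centroids = {}
    for label in set(labels):
        cluster = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

def predict(centroids, point):
    """Assign the class whose centroid is nearest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Training phase: the system "learns" the pattern from labelled examples.
train_points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
train_labels = ["low", "low", "low", "high", "high", "high"]
model = train(train_points, train_labels)

# Test phase: held-out examples the system has never seen.
test_points = [(0.5, 0.5), (5.5, 5.5), (1, 1), (6, 6)]
test_labels = ["low", "high", "low", "high"]
accuracy = sum(predict(model, p) == l
               for p, l in zip(test_points, test_labels)) / len(test_points)
print(f"held-out accuracy: {accuracy:.2f}")
```

The test phase matters precisely because, as the next paragraph notes, the training data can never cover every case the system will meet.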

Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn’t be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.
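The kind of fooling mentioned above can be shown even without a neural network. In this toy sketch (weights and input hand-picked for illustration), a tiny perturbation that nudges each feature slightly against the classifier's weights, the same idea as the "fast gradient sign" attack, flips a linear classifier's decision:

```python
def linear_classify(w, b, x):
    """Return class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# A hypothetical classifier and an input it classifies as class 1.
w = [0.5, -0.3, 0.8]   # weights (assumed, for illustration)
b = -0.1
x = [0.2, 0.1, 0.1]    # score = 0.10 - 0.03 + 0.08 - 0.1 = 0.05 > 0

# Push each feature a small step against the weight direction: for a linear
# model this is exactly the "fast gradient sign" perturbation.
epsilon = 0.1
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print("original:", linear_classify(w, b, x))       # class 1
print("perturbed:", linear_classify(w, b, x_adv))  # class 0
```

Each coordinate moves by at most 0.1, yet the decision flips; in high-dimensional image classifiers the same trick works with perturbations too small for a human to notice.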

5. Racist robots. How do we eliminate AI bias?

Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

6. Security. How do we keep AI safe from adversaries?

The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.

Proliferation of Armed Drones

7. Evil genies. How do we protect against unintended consequences?

It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.
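The cancer example can be caricatured in a few lines of Python: an optimizer scoring only "cancer cases" picks the catastrophic plan, while an objective that also values the people the plan is for does not. All plan names and numbers here are invented:

```python
plans = {
    "fund research":      {"cancer_cases": 400_000, "people_alive": 8_000_000_000},
    "improve screening":  {"cancer_cases": 300_000, "people_alive": 8_000_000_000},
    "eliminate humanity": {"cancer_cases": 0,       "people_alive": 0},
}

# Naive objective: minimize cancer cases, and nothing else.
naive_choice = min(plans, key=lambda p: plans[p]["cancer_cases"])

# A (still crude) objective that also rewards keeping people alive.
def corrected(p):
    return plans[p]["cancer_cases"] - plans[p]["people_alive"]

corrected_choice = min(plans, key=corrected)

print("naive objective picks:    ", naive_choice)
print("corrected objective picks:", corrected_choice)
```

There is no malice anywhere in this code; the naive objective simply says nothing about the context humans take for granted, which is the point of the "genie" analogy.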

8. Singularity. How do we stay in control of a complex intelligent system?

The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

9. Robot rights. How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
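The reward loop described above can be sketched with a toy "dog training" agent (all reward values invented): it tries actions, receives a virtual reward, and nudges its estimate of each action's value toward the rewards it sees, the same mechanism, vastly simplified, that reinforcement learning systems run at scale.

```python
import random

def learn_from_reward(rewards, episodes=200, alpha=0.1, epsilon=0.1, seed=1):
    """Keep a running value estimate per action; mostly pick the best-looking
    action, occasionally explore, and nudge each estimate toward the reward."""
    rng = random.Random(seed)
    values = {action: 0.0 for action in rewards}
    for _ in range(episodes):
        if rng.random() < epsilon:              # explore a random action
            action = rng.choice(list(rewards))
        else:                                   # exploit the best estimate
            action = max(values, key=values.get)
        values[action] += alpha * (rewards[action] - values[action])
    return values

# "sit" earns a treat (reward 1), "bark" earns nothing, like training a dog.
values = learn_from_reward({"sit": 1.0, "bark": 0.0})
print(values)
```

After enough episodes the agent's estimate for "sit" approaches the reward 1.0 while "bark" stays near 0, which is the "reinforced performance" the paragraph describes.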

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
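A tiny genetic algorithm of the kind described makes the "deleted instances" concrete. In this sketch the (toy) goal is to evolve a bitstring of all ones: each generation the fitter half of the population survives and recombines, and the unsuccessful half is deleted.

```python
import random

def fitness(bits):
    """Toy fitness: the number of ones (the goal is the all-ones string)."""
    return sum(bits)

def evolve(length=20, pop_size=30, generations=40, seed=2):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: only the fitter half survives; the rest are deleted.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]

        # Crossover + mutation: survivors combine to form the next generation.
        children = []
        while len(survivors) + len(children) < pop_size:
            mum, dad = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = mum[:cut] + dad[cut:]
            if rng.random() < 0.2:              # occasional one-bit mutation
                pos = rng.randrange(length)
                child[pos] ^= 1
            children.append(child)
        population = survivors + children

    best = max(population, key=fitness)
    return best, fitness(best)

best, score = evolve()
print(f"best fitness after evolution: {score}/20")
```

Every generation here silently discards half the population, which is exactly the practice the "mass murder" question is probing.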

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.

Source: World Economic Forum