The Internet of Things (IoT) is what makes your life easier! Your phone is just one of the many products of the Internet of Things. So, what exactly is the Internet of Things all about? Let's find out!
Maybe some of you are already familiar with the term Internet of Things, while others may not be. But really, we live with the Internet of Things more closely than we realise.
Look at how much easier everything has become nowadays. You don't have to go to the supermarket when you can shop online; you can carry less and less cash and pay for things online instead; wi-fi is everywhere you go. That alone should give you at least a rough picture of the IoT.
So, what is the Internet of Things (IoT)?
The Internet of Things is the concept of a system that can transfer data over a network without requiring human-to-human or human-to-computer interaction. At its core, this system is about connecting everything in the world, not only people but also the things around us: people to people, people to things, and things to things.
To put it simply, as McClelland writes on his website, iotforall.com:
the Internet of Things means taking all the things in the world and connecting them to the internet.
He also explains that there are three categories of things that are connected to the internet. First, things that collect information and send it. Second, things that receive information and act on it. Third, things that can do both.
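The three categories can be sketched in a few lines of toy code. This is not any real IoT SDK; every class, method and value below is hypothetical, chosen only to illustrate the sense/send, receive/act, and do-both roles.

```python
# Toy sketch of McClelland's three categories of connected "things".
# Hypothetical names throughout; a Python list stands in for the network.

class TemperatureSensor:
    """Category 1: collects information and sends it."""
    def read_and_send(self, network):
        reading = 22.5  # pretend we sampled a thermometer, in degrees C
        network.append(("temperature", reading))
        return reading

class Heater:
    """Category 2: receives information and acts on it."""
    def __init__(self):
        self.on = False
    def receive(self, message):
        kind, value = message
        # Act on the received data: heat only when it's cold.
        self.on = (kind == "temperature" and value < 20.0)

class SmartThermostat:
    """Category 3: does both -- it senses, sends, and acts."""
    def __init__(self):
        self.sensor = TemperatureSensor()
        self.heater = Heater()
    def step(self, network):
        self.sensor.read_and_send(network)
        self.heater.receive(network[-1])
        return self.heater.on

network = []  # stand-in for "the internet"
thermostat = SmartThermostat()
heating = thermostat.step(network)
print(heating)  # False: 22.5 degrees is warm enough, so the heater stays off
```

The point of the sketch is only the division of labour: the sensor never acts, the heater never measures, and the thermostat combines the two.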
Moreover, the word “things” in the Internet of Things refers to objects that have been equipped with IoT devices. Take, for example, a house with an anti-theft sensor, a person with an e-wallet for online payments, the wireless speaker you use at home, or even Remote Patient Monitoring (RPM), which allows a patient to be monitored remotely from home.
IoT in our lives
Nowadays, people already rely on the Internet of Things in daily life. The way you use your phone is part of it, and even the way you reached this website is part of the Internet of Things too. IoT influences our lives by making them much easier.
You can tap and swipe on your phone to arrange more convenient transport. Paying with an e-wallet or a barcode is also popular nowadays. You can even set up your own fitness schedule with your coach online. See? All of this is thanks to the concept of the Internet of Things.
There are many predictions about how IoT could influence societies around the world. Some even said there would be over 10 billion connected devices by 2020. Who knows? More importantly, IoT can help human life and bring many benefits to our daily routines. The use of IoT could lead to smart people with smart devices, and that can lead to a smart society as well.
However, IoT can also enable people to do things that harm others. So, in the end, it is up to you to use it wisely and smartly. At the end of the day, all the choices are in your hands.
Google has launched its latest flagship phones, Pixel 4 and 4XL. Although the new models feature relatively marginal improvements over their predecessors, the launch was staged with much fanfare by Google, as if it represented a major breakthrough for the company and the smartphone market – despite most of the product specs being leaked before the event. The launch was just the latest in a series of product launches by leading digital tech companies that sharply overstated recent innovations.
On September 10, for instance, Apple introduced three new iPhones, revamped Apple Watches and two new subscription services, TV+ and Apple Arcade. Two weeks later, Amazon presented a long list of new gadgets at its Alexa event. All these launches have something in common: the “novelties” they introduce are merely iterations of their existing product offering, yet they are presented as revolutionary.
Exaggeration does not come as a surprise in marketing and advertisement. Yet digital corporations pursue a precise strategy with their product launches. The main goal of these events is not so much introducing specific gadgets. It is to position these companies at the centre of the aura that the so-called digital revolution has acquired for billions of users – and customers – around the world.
Launching new technology devices through public events predates Silicon Valley. Alexander Graham Bell and Guglielmo Marconi, two of the most popular inventors and entrepreneurs in the late 19th and early 20th century, organised events to present the telephone and wireless telegraphy.
The audience at these events were mainly scientists or technical experts, but they were also attended by politicians, entrepreneurs, and even kings and queens. The celebrated American inventor Thomas Edison went one step further, presenting his new products in public events such as international exhibitions and tech fairs.
Like today, launches of new products helped shape public opinion and to make a name for companies such as AT&T, Marconi and Edison. They were even used to fight commercial wars. At the end of the 19th century Edison launched a campaign of public events to promote his direct current standard against the rival alternating current. He even electrocuted animals (like the elephant Topsy) in front of journalists to demonstrate that the other standard was dangerous.
More recently, Steve Jobs followed in the footsteps of these inventor-entrepreneurs and codified a “genre” – the so-called keynote. Alone on stage and wearing a roll neck and jeans (an informal “uniform” for geeks), Jobs launched several Apple products in front of audiences of tech-enthusiasts. These events helped build the myth of Steve Jobs and Apple.
What product launches are really about
Jobs’ talent lay more in marketing and promoting new devices than in developing technology. As early as the 1980s, Apple’s founder recognised the power of a new vision surrounding digital technologies. This vision saw the personal computer and later the internet as harbingers of a new era.
It was a powerful cultural myth centred around the idea that we are experiencing a digital “revolution”, a concept traditionally associated with political change that now came to describe the impact of new technology. In this context, Jobs carefully staged his launches in order to present Apple as the embodiment of this myth.
Take, for instance, Apple’s famous 2007 iPhone launch. Jobs started his talk arguing that “every once in a while, a revolutionary product comes along that changes everything”. His examples included key moments from Apple’s corporate history: the Macintosh reinvented “the entire computer industry” in 1984, the iPod changed the “entire music industry” in 2001, and the iPhone was about to “reinvent the phone”.
This is a narrow account of technological change, to say the least. Believing that one single device brought about a digital revolution is like seeing a crowd of people in Times Square and assuming they turned up because you broadcast on WhatsApp that everyone should go there. It is, however, a convenient point of view for huge corporations such as Apple or Google. To keep their position in the digital market, these companies not only need to design sophisticated hardware and software, they also need to nurture the myth that we live in a state of incessant revolution of which they are the key engine.
In our research, we call this myth “corporational determinism” because, like other forms of determinism, it posits the idea that one single agent is responsible for all changes. The way that digital media companies like Amazon, Apple, Facebook and Google communicate to the public is largely an attempt to propagandise this myth.
So you should not be worried if Google’s latest launch did not blow you away. The key function of product launches is not actually to launch products. It is for companies to present themselves as the smartest agents in contemporary society, the protagonists of technological change and, ultimately, the heroes of the digital revolution.
Digital platforms, the websites and apps which compete for our precious screen time, have successfully invaded the traditional territory of many sectors of the “old economy”. They have become the preferred – expected, even – domains for many kinds of human behaviour, from banking and property buying, to dating and entertainment.
In doing so, Airbnb, Amazon, Uber and (many) others have swiftly managed to shift economic behaviour from the world of physical bricks and mortar to a digital world powered by algorithms.
These companies are often praised for apparently providing consumers with ever more choice. But in fact, the fundamental idea behind the algorithms which power these platforms is to reduce the variation of options available.
This is because digital platforms are meticulously designed to appeal to individual users at both ends – sellers or providers, and consumers or users. In theory, this reduces the complexity of decision-making, and increases the speed of digital interaction.
Yet in many regards, digital technology has simply made things more complicated. There are three main ways in which it has done so.
First, while the boundaries between physical and digital space have become blurred, so too has the distinction between producer and consumer.
This is because social media platforms have given consumers a new and stronger voice. Likes, shares, dislikes, comments and reviews all provide information that was not available in a pre-digital age.
This voice informs both well-known brands and start-up entrepreneurs about how their products are being perceived. Consumers become part of the marketing operation in a way that was not possible before, complicating the way we value products and services.
Second, the ways in which business initiatives attract funding have also altered considerably. Specifically, crowdfunding has become a popular way of raising finance for new ideas or projects, attracting donations through collaborative contributions. And recent analysis suggests that crowdfunding is fuelling a wide array of ideas that go well beyond what would be possible in the context of traditional funding (from banks or wealthy investors).
As new business ventures gain funding and momentum more easily through the digital landscape, they increase the overall complexity of the marketplace. The speed (and scope and scale) at which markets are redefined is accelerated by entrepreneurs who create new offerings.
Third, the digital media landscape has given rise to a plethora of platforms enabled by information and communications technology for the exchange of goods and services. Specifically, the “sharing”, “access” and “community-based” economies represent new ways in which exchanges of goods and services take place on platforms such as Airbnb, Uber and Couchsurfing.
The sharing economy, however, has recently been shown to be expanding into various new sectors including fashion and sports, adding complexity by going beyond the previously dominant sectors of transportation and accommodation.
In light of all these rapid developments, which change the conventional view of what a market-based economy is, there are several serious challenges facing society.
A simply complex situation
The challenges concern how we all – consumers, producers, investors – manage communication, privacy and cyber security. Given the nature of the algorithmic world, voices are increasingly raised about the risks of artificial intelligence (AI) for humankind.
But before we even reach that level, the risks to liberal human thought are already great, when the ways in which we are being persuaded are unclear to so many of us.
Consumers, firms and policymakers are already feeding AI-enabled online robots with ever more information aimed at improving automated digital solutions for everyday decisions, issues and concerns.
Can we balance the value generated from such digital platforms with the potential risks? Probably. But concerted action from governments and businesses is needed to enhance transparency about the risks of algorithmic solutions and decisions. That’s the only way we can all be expected to understand this brave new digital world.
YouTube’s video recommendation system, in particular, has been criticised for radicalising young people and steering viewers down rabbit holes of disturbing content.
The company claims it is trying to avoid amplifying problematic content. But research from YouTube’s parent company, Google, indicates this is far from straightforward, given the commercial pressure to keep users engaged via ever more stimulating content.
But how do YouTube’s recommendation algorithms actually work? And how much are they really to blame for the problems of radicalisation?
The fetishisation of algorithms
Almost everything we see online is heavily curated. Algorithms decide what to show us in Google’s search results, Apple News, Twitter trends, Netflix recommendations, Facebook’s newsfeed, and even pre-sorted or spam-filtered emails. And that’s before you get to advertising.
More often than not, these systems decide what to show us based on their idea of what we are like. They also use information such as what our friends are doing and what content is newest, as well as built-in randomness. All this makes it hard to reverse-engineer algorithmic outcomes to see how they came about.
Algorithms take all the relevant data they have and process it to achieve a goal – often one that involves influencing users’ behaviour, such as selling us products or keeping us engaged with an app or website.
At YouTube, the “up next” feature is the one that receives most attention, but other algorithms are just as important, including search result rankings, homepage video recommendations, and trending video lists.
How YouTube recommends content
The main goal of the YouTube recommendation system is to keep us watching. And the system works: it is responsible for more than 70% of the time users spend watching videos.
When a user watches a video on YouTube, the “up next” sidebar shows videos that are related but usually longer and more popular. These videos are ranked according to the user’s history and context, and newer videos are generally preferenced.
This is where we run into trouble. If more watching time is the central objective, the recommendation algorithm will tend to favour videos that are new, engaging and provocative.
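That bias can be made concrete with a toy ranking function. This is emphatically not YouTube's actual system; the scoring formula, field names and weights below are all hypothetical, invented only to show how a watch-time objective pushes newer, more engaging videos to the top.

```python
# Toy illustration of a watch-time-driven ranker (not YouTube's real one).
# score = engagement * freshness, so new and engaging videos win.

def score(video, today):
    age_days = today - video["uploaded_day"]
    freshness = 1.0 / (1.0 + age_days)       # newer video => higher score
    engagement = video["avg_watch_minutes"]  # more watch time => higher score
    return engagement * freshness

videos = [
    {"title": "calm lecture",      "uploaded_day": 0,  "avg_watch_minutes": 8.0},
    {"title": "provocative rant",  "uploaded_day": 90, "avg_watch_minutes": 12.0},
    {"title": "fresh clickbait",   "uploaded_day": 99, "avg_watch_minutes": 9.0},
]

today = 100
ranked = sorted(videos, key=lambda v: score(v, today), reverse=True)
print([v["title"] for v in ranked])
# ['fresh clickbait', 'provocative rant', 'calm lecture']
```

Note how the hundred-day-old lecture sinks to the bottom despite a respectable watch time: once recency and engagement are the only signals, provocative and novel content is structurally favoured.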
Yet algorithms are just pieces of the vast and complex sociotechnical system that is YouTube, and there is so far little empirical evidence on their role in processes of radicalisation.
In fact, recent research suggests that instead of thinking about algorithms alone, we should look at how they interact with community behaviour to determine what users see.
The importance of communities on YouTube
YouTube is a quasi-public space containing all kinds of videos: from musical clips, TV shows and films, to vernacular genres such as “how to” tutorials, parodies, and compilations. User communities that create their own videos and use the site as a social network have played an important role on YouTube since its beginning.
Today, these communities exist alongside commercial creators who use the platform to build personal brands. Some of these are far-right figures who have found in YouTube a home to push their agendas.
It is unlikely that algorithms alone are to blame for the radicalisation of a previously “moderate audience” on YouTube. Instead, research suggests these radicalised audiences existed all along.
Right-wing content creators also know YouTube’s policies well. Their videos are often “borderline” content: they can be interpreted in different ways by different viewers.
YouTube’s community guidelines restrict blatantly harmful content such as hate speech and violence. But it’s much harder to police content in the grey areas between jokes and bullying, religious doctrine and hate speech, or sarcasm and a call to arms.
Moving forward: a cultural shift
There is no magical technical solution to political radicalisation. YouTube is working to minimise the spread of borderline problematic content (for example, conspiracy theories) by reducing its recommendations of videos that could potentially misinform users.
However, YouTube is a company and it’s out to make a profit. It will always prioritise its commercial interests. We should be wary of relying on technological fixes by private companies to solve society’s problems. Moreover, quick responses to “fix” these issues might also harm politically edgy communities (such as activists) and minority communities (such as LGBTQ groups).
When we try to understand YouTube, we should take into account the different factors involved in algorithmic outcomes. This includes systematic, long-term analysis of what algorithms do, but also how they combine with YouTube’s prominent subcultures, their role in political polarisation, and their tactics for managing visibility on the platform.
Before YouTube can implement adequate measures to minimise the spread of harmful content, it must first understand what cultural norms are thriving on its site – and being amplified by its algorithms.
The authors would like to acknowledge that the ideas presented in this article are the result of ongoing collaborative research on YouTube with researchers Jean Burgess, Nicolas Suzor, Bernhard Rieder, and Oscar Coromina.
Amazon has recently developed an option whereby Alexa will only activate if people address it with a “please”. This suggests that we are starting to recognise some intelligent machines in a way that was previously reserved only for humans. In fact, this could very well be the first step towards recognising the rights of machines.
Machines are becoming a part of the fabric of everyday life. Whether it be the complex technology that we are embedding inside of us, or the machines on the outside, the line between what it means to be human and machine is softening. As machines get more and more intelligent, it is imperative that we begin discussing whether it will soon be time to recognise the rights of robots, as much for our sake as for theirs.
When someone says that they have a “right” to something, they are usually saying that they have a claim or an expectation that something should be a certain way. But what is just as important as rights are the foundations on which they are based. Rights rely on various intricate frameworks, such as law and morality. Sometimes, the frameworks may not be clear cut. For instance, in human rights law, strong moral values such as dignity and equality inform legal rights.
So rights are often founded upon human principles. This helps partially explain why we have recognised the rights of animals. We recognise that it is ethically wrong to torture or starve animals, so we create laws against it. As intelligent machines weave further into our lives, there is a good chance that our human principles will also force us to recognise that they too deserve rights.
But you might argue that animals differ from machines in that they have some sort of conscious experience. And it is true that consciousness and subjective experience are important, particularly to human rights. Article 1 of the Universal Declaration of Human Rights 1948, for example, says all human beings “are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.
However, consciousness and human rights are not the only basis of rights. In New Zealand and Ecuador, rivers have been granted rights because humans deemed their very existence to be important. So rights don’t emerge only from consciousness, they can extend from other criteria also. There is no one correct type or form of rights. Human rights are not the only rights.
As machines become even more complex and intelligent, just discarding or destroying them without asking any questions at all about their moral and physical integrity seems ethically wrong. Just like rivers, they too should receive rights because of their meaning to us.
Imagine a complex and independent machine providing health care to a human over a long period of time. The machine resembles a person and applies intelligence through natural speech. Over time, the machine and the patient build up a close relationship. Then, after a long period of service, the company that makes the machine decides it is time to turn off and discard this perfectly working machine. It seems ethically wrong to simply discard this intelligent machine, which has kept the patient alive and built a relationship with them, without even entertaining its right to integrity and other rights.
This might seem absurd, but imagine for a second that it is you who has built a deep and meaningful relationship with this intelligent machine. Wouldn’t you be desperately searching for a way to stop it being turned off and your relationship being lost? It is as much for our own human sake as for the sake of intelligent machines that we ought to recognise the rights of intelligent machines.
Sexbots are a good example. The UK’s sexual offences law exists to protect the sexual autonomy of the human victim. But it also exists to ensure that people respect sexual autonomy, the right of a person to control their own body and their own sexual activity, as a value.
But the definition of consent in section 74 of the Sexual Offences Act 2003 in the UK specifically refers to “persons” and not machines. So right now a person can do whatever they wish to a sexbot, including torture. There is something troubling about this. And it is not because we believe sexbots to have consciousness. Instead, it is probably because by allowing people to torture robots, the law stops ensuring that people respect the values of personal and sexual autonomy, that we consider important.
These examples very much show that there is a discussion to be had over the rights of intelligent machines. And as we rapidly enter an age where these examples will no longer be hypothetical, the law must keep up.
Matter of respect
We are already recognising complex machines in a manner that was previously reserved only for humans and animals. We feel that our children must be polite to Alexa as, if they are not, it will damage our own notions of respect and dignity. Unconsciously we are already recognising that how we communicate with and respect intelligent machines will affect how we communicate with and respect humans. If we don’t extend recognition to intelligent machines, then it will affect how we treat and consider humans.
Machines are making their way into our world. Google’s recent experiment with natural language assistants, in which the AI sounded eerily like a human, gave us an insight into this future. One day, it may become impossible to tell whether we are interacting with machines or with humans. When that day comes, rights may have to change to include machines as well. As we change, rights may naturally have to adapt too.