From robo calls to spam texts: annoying campaign tricks that are legal


Politicians are allowed to spam you with campaign texts.
from shutterstock.com

 

Graeme Orr, The University of Queensland

“Make Australia Great.” So began several million text messages, sent last week from Clive Palmer’s United Australia Party. Palmer’s bumptious campaign techniques actually predated those of Donald Trump.

But now he is aping Trump’s slogans and nationalism, if with a less reactionary, more third-way ethos. The chances of Palmer rising again, like the proverbial political soufflé, are remote. But what of his campaign methods?

Mass texting (I’ll dub it “mexting”) is nothing new in electoral politics. Fifteen years ago it proved controversial, during a local election on the Gold Coast. Late night texts were sent to target young voters while they were out on the town.

The message – which came from nightclubs, urging voters to keep licensed venues open all hours – was lost in a backlash. In those days people paid not just for each text they sent, but often to receive them as well.

Mobiles have since become more ubiquitous, intimate fixtures, and we no longer pay to receive messages, nor do many of us pay for individual texts.

Palmer’s party admits to receiving more than 3,000 complaints (which he claims were robo-calls by trade unions), and he says there’s more to come. But why risk alienating the very people you are reaching out to? And how, if at all, does the law regulate such in-your-face campaign techniques?

The law on ‘mexting’?

For once, the legal how is easier than the political why. The national Spam Act 2003 regulates unsolicited electronic messages via telephone and email. But only commercial messages, about goods and services or investments, are prohibited.

Social and political advocacy is not treated as suspect. On the contrary, it is encouraged. The Privacy Act, in particular, lets MPs and parties collect data on citizens’ views, to better personalise their messages.

Exempting politicians from privacy laws is based on the philosophy that freedom of political communication is vital to Australia’s democratic process.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


Even when government agencies, charities or political parties offer services or solicit donations or membership, they are given a free hand. All they have to do is include a link about who authorised the message.

The licence to advocate, provided it is not done anonymously, is an old one under electoral law in English-speaking democracies. The obligation to “tag” messages enables the speaker to be traced and helps us discount the source of political opinions.




Read more:
Don’t be distracted by an SMS in the same-sex marriage survey debate


That is merely a rule about form, not manner or content. When it comes to manner, there are laws against offensive messages via mass media – whether broadcast or sent by post. (Good luck enforcing that rule in the back passages of the internet.)

There are also, famously, rules against discriminatory “hate” speech.

When it comes to content, you need to avoid defaming people. But there is no general requirement of truth, in the media or in politics, outside rules against misleading parliament, and a limited offence of materially false, paid, election-time ads in South Australia.

At the 2016 general election, the Labor Party dismayed the government and many observers, by mexting as part of its so-called “Mediscare” campaign. The texts looked like they came from Medicare itself. The trick led to a tightening of rules and a new offence of “impersonating” a Commonwealth body.

Other in-your-face campaign methods

Mexting sits in a long line of in-your-face campaign methods. The century-old tradition of handing out flyers lives on, as letterboxes in marginal electorates will surely testify later this year.

Another was the “soap box” speech, trundled around shopping precincts via a loudspeaker on the back of a ute. In the middle of last century it was so typical that, as a young candidate, Gough Whitlam is said to have campaigned this way via a boat, to reach outlying suburbs not well serviced by roads.

 

Sound trucks show the ‘soap box’ method of campaigning is still used in Japan.
Wikimedia Commons

 

It is all but dead today in Australia, but lives on in the “sound trucks” of Japan.

More recent innovations include the ubiquitous “direct mail” – a personalised if expensive variant of letterbox stuffing – and the “robo-call”, where a pre-recorded message is automatically dialled to thousands of telephones. I well recall picking up my landline, over dinner in 2007, to hear John Howard greet me. He happily ploughed on despite my unflattering response.

As for how, practically, a campaign assembles thousands of valid mobile numbers… well, Palmer’s party says it has no list. It may have hired a marketing firm to send out the texts. Commercial entities, notoriously, collect and trade files of phone numbers, postal and email addresses, and more.

Still, why? A cynic might say that for Palmer, any notoriety is good notoriety. His gambit has people talking about him again. Minor parties expect to alienate people: their goal is to attract a few percent of the vote.

Why major parties employ such tactics is another matter. They have to build broader coalitions of voters. But there is a cost-benefit analysis at work. Electronic messaging can reach swathes of people more cheaply than broadcast advertising, which in any event lacks the reach it once had. And negative advertising, like Mediscare, tends to work.

As it is, modern parties lack mass memberships and cannot rely primarily on organic influence or door-knocking by activists.

So while spamming, in text or audio, seems perverse – and is unlikely to be as effective as targeted or viral messaging on social media, or community-based campaigning – it won’t disappear.

For my part, I won’t grumble about a text from Mr Palmer popping up in my pocket. It beats his huge yellow billboards in terms of a blight on our public spaces.

Graeme Orr, Professor of Law, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Don’t click that link! How criminals access your digital devices and what happens when they do


A link is a mechanism for data to be delivered to your device.
Unsplash/Marvin Tolentino

 

Richard Matthews, University of Adelaide and Kieren Niĉolas Lovell, Tallinn University of Technology

Every day, often multiple times a day, you are invited to click on links sent to you by brands, politicians, friends and strangers. You download apps on your devices. Maybe you use QR codes.

Most of these activities are secure because they come from sources that can be trusted. But sometimes criminals impersonate trustworthy sources to get you to click on a link (or download an app) that contains malware.

At its core, a link is just a mechanism for data to be delivered to your device. Code can be built into a website which redirects you to another site and downloads malware to your device en route to your actual destination.
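This is easy to see in the markup itself: the text a link displays and the address it actually carries are independent fields. A minimal sketch using Python's standard-library HTML parser (the domain names are invented for illustration):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect each anchor's real destination (href) alongside its visible text."""
    def __init__(self):
        super().__init__()
        self.links = []            # list of (visible_text, href)
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append(("".join(self._current_text).strip(), self._current_href))
            self._current_href = None

# The visible text claims one destination; the href points somewhere else entirely.
html = '<p>Click <a href="http://evil.example/payload">https://www.mybank.com</a></p>'
auditor = LinkAuditor()
auditor.feed(html)
text, href = auditor.links[0]
print(text)  # https://www.mybank.com
print(href)  # http://evil.example/payload
```

This mismatch is exactly what hovering over a link (or long-pressing on a phone) reveals before you commit to clicking.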

When you click on unverified links or download suspicious apps you increase the risk of exposure to malware. Here’s what could happen if you do – and how you can minimise your risk.




Read more:
How suppliers of everyday devices make you vulnerable to cyber attack – and what to do about it


What is malware?

Malware is defined as malicious code that:

will have adverse impact on the confidentiality, integrity, or availability of an information system.

In the past, malware described malicious code that took the form of viruses, worms or Trojan horses.

Viruses embedded themselves in genuine programs and relied on these programs to propagate. Worms were generally stand-alone programs that could install themselves using a network, USB or email program to infect other computers.

Trojan horses took their name from the wooden horse the Greeks gifted to the city of Troy during the Trojan War, recounted in Homer’s Odyssey. Much like the wooden horse, a Trojan horse looks like a normal file until some predetermined action causes the code to execute.

Today’s generation of attacker tools are far more sophisticated, and are often a blend of these techniques.

These so-called “blended attacks” rely heavily on social engineering – the ability to manipulate someone into doing something they wouldn’t normally do – and are often categorised by what they ultimately do to your systems.

What does malware do?

Today’s malware comes in easy-to-use, customised toolkits distributed on the dark web or by well-meaning security researchers attempting to fix problems.

With a click of a button, attackers can use these toolkits to send phishing emails and spam SMS messages to deploy various types of malware. Here are some of them:

  • a remote administration tool (RAT) can be used to access a computer’s camera and microphone, and to install other types of malware
  • keyloggers can be used to capture passwords, credit card details and email addresses
  • ransomware is used to encrypt private files and then demand payment in return for the password
  • botnets are used for distributed denial of service (DDoS) attacks and other illegal activities. DDoS attacks can flood a website with so much virtual traffic that it shuts down, much like a shop being filled with so many customers you are unable to move
  • cryptominers will use your computer hardware to mine cryptocurrency, which will slow your computer down
  • hijacking or defacement attacks are used to deface a site or embarrass you by posting pornographic material to your social media

 

An example of a defacement attack on The Utah Office of Tourism Industry from 2017.
Wordfence

 




Read more:
Everyone falls for fake emails: lessons from cybersecurity summer school


How does malware end up on your device?

According to insurance claim data from UK-based businesses, over 66% of cyber incidents are caused by employee error. Although the data attributes only 3% of these attacks to social engineering, our experience suggests the majority of these attacks would have started this way.

For example, by employees not following dedicated IT and information security policies, not being informed of how much of their digital footprint has been exposed online, or simply being taken advantage of. Merely posting what you are having for dinner on social media can open you up to attack from a well trained social engineer.

QR codes are equally risky if users open the link a code points to without first validating where it leads, as indicated by this 2012 study.

Even opening an image in a web browser and running a mouse over it can lead to malware being installed. This is quite a useful delivery tool considering the advertising material you see on popular websites.

Fake apps have also been discovered on both the Apple and Google Play stores. Many of these attempt to steal login credentials by mimicking well known banking applications.

Sometimes malware is placed on your device by someone who wants to track you. In 2010, the Lower Merion School District settled two lawsuits brought against them for violating students’ privacy and secretly recording using the web camera of loaned school laptops.

What can you do to avoid it?

In the case of the Lower Merion School District, students and teachers suspected they were being monitored because they “saw the green light next to the webcam on their laptops turn on momentarily.”

While this is a great indicator, many hacker tools will ensure webcam lights are turned off to avoid raising suspicion. On-screen cues can give you a false sense of security, especially if you don’t realise that the microphone is always being accessed for verbal cues or other forms of tracking.

 

Facebook CEO Mark Zuckerberg covers the webcam of his computer. It’s commonplace to see information security professionals do the same.
iphonedigital/flickr

 

Basic awareness of the risks in cyberspace will go a long way towards mitigating them. This is called cyber hygiene.

Using good, up-to-date virus and malware scanning software is crucial. However, the most important tip is to keep your device updated so it has the latest security patches.

Hover over links in an email to see where you are really going. Avoid shortened links, such as bit.ly and QR codes, unless you can check where the link is going by using a URL expander.
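That advice can even be partly automated. As a rough sketch (the shortener list below is illustrative, not exhaustive), a script can flag links whose host is a known shortening service so they get expanded and inspected before anyone clicks:

```python
from urllib.parse import urlparse

# A few well-known URL-shortener domains (illustrative, not exhaustive).
KNOWN_SHORTENERS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com", "ow.ly"}

def looks_shortened(url: str) -> bool:
    """Flag URLs whose host is a known shortening service, so they can be
    run through a URL expander before clicking."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_SHORTENERS

print(looks_shortened("https://bit.ly/3abcde"))         # True
print(looks_shortened("https://www.example.com/page"))  # False
```

A real filter would also need to handle redirect chains, since a “safe” host can still bounce you through a shortener.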

What to do if you already clicked?

If you suspect you have malware on your system, there are simple steps you can take.

Open your webcam application. If you can’t access the device because it is already in use, this is a telltale sign that you might be infected. Higher than normal battery usage or a machine running hotter than usual are also good indicators that something isn’t quite right.

Make sure you have good anti-virus and anti-malware software installed. Tools such as Malwarebytes, or Estonian start-ups such as Seguru, can be installed on your phone as well as your desktop to provide real-time protection. If you are running a website, make sure you have good security installed. Wordfence works well for WordPress blogs.

More importantly though, make sure you know how much data about you has already been exposed. Google yourself – including a Google image search against your profile picture – to see what is online.

Check all your email addresses on the website haveibeenpwned.com to see whether your passwords have been exposed. Then make sure you never use any passwords again on other services. Basically, treat them as compromised.
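The companion Pwned Passwords service works via a k-anonymity range API: you hash the password with SHA-1, send only the first five hex characters of the digest, and match the remaining 35 characters locally against the returned list, so the password itself never leaves your machine. A minimal sketch of the client-side step (the network call is deliberately omitted):

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-character prefix sent to
    the Pwned Passwords range API and the 35-character suffix matched locally.
    Only the prefix ever leaves your machine (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query("password")
# The actual lookup (not performed here) would be:
#   GET https://api.pwnedpasswords.com/range/<prefix>
# and you then search the response body for <suffix>.
print(prefix)  # 5BAA6
```

Because the server only ever sees five hex characters, it learns almost nothing about which password you are checking.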

Cyber security has technical aspects, but remember: any attack that doesn’t affect a person or an organisation is just a technical hitch. Cyber attacks are a human problem.

The more you know about your own digital presence, the better prepared you will be. All of our individual efforts better secure our organisations, our schools, and our family and friends.

Richard Matthews, Lecturer Entrepreneurship, Commercialisation and Innovation Centre | PhD Candidate in Image Forensics and Cyber | Councillor, University of Adelaide and Kieren Niĉolas Lovell, Head of TalTech Computer Emergency Response Team, Tallinn University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Face recognition technology in classrooms is here – and that’s ok


Facial recognition is already in our schools.
www.shutterstock.com

 

Brian Lovell, The University of Queensland

Recently, the Victorian Government brought in new rules banning Victorian state schools from using facial recognition technology in classrooms unless they have the approval of parents, students and the Department of Education.

Students may be justifiably horrified at the thought of being monitored as they move throughout the school during the day. But a roll marking system could be as simple as looking at a tablet or iPad once a day instead of being signed off on a paper roll. It simply depends on the implementation.

Trials have already begun in independent schools in NSW and up to 100 campuses across Australia. According to the developers, the technology promises to save teachers up to 2.5 hours a week by replacing the need for them to mark the roll at the start of every class.




Read more:
I should know you: ‘face blindness’ and the problem of identifying others


Many students already have smartphones that recognise faces. There are also downloadable face recognition apps for Android phones and iPhones. So face recognition is already in our schools.

And I argue that, like earlier technologies such as the motor vehicle and mobile phone, a strategy where adoption is managed to create the most good and least harm is appropriate. We shouldn’t simply ban it.

How does it work?

Face recognition technology uses a camera to capture a face and then matches this face against a database to determine identity. First, the face or faces must be detected and localised in the camera frame. Then, face images are aligned and rescaled to a standard size. Finally, these faces are matched against a database. Matching is almost invariably performed using artificial intelligence technology.
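The matching step typically reduces each aligned face to a numeric embedding vector and compares embeddings with a similarity measure. A toy sketch of that final stage (the 3-dimensional vectors and the 0.8 threshold are invented for illustration; real systems use embeddings with hundreds of dimensions and carefully tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, database, threshold=0.8):
    """Return the enrolled name whose embedding best matches the probe,
    or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy enrolment database of 3-D embeddings.
db = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
print(identify([0.88, 0.12, 0.21], db))  # alice
print(identify([0.0, 0.0, 1.0], db))     # None (no enrolled face is close enough)
```

The threshold is the policy lever: raising it trades missed matches (a student not marked present) for fewer false matches (the wrong student marked present).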

We are now in a golden age of face recognition. The main reason for rapid adoption is that recognition accuracy has improved astronomically in recent years: 20 times better from 2014 to 2018.

Now deep learning – a form of artificial intelligence that uses a machine to do a task that usually requires human intelligence – is used for face recognition and an increasing number of other vision tasks.




Read more:
The future of artificial intelligence: two experts disagree


Saving time

The simple application of this technology proposed for schools is to automate the collection of the student roll call for classes. This is a mandatory compliance requirement imposed by the education department.

Roll call is a menial task currently performed by highly skilled teachers or their assistants. Looplearn, the Melbourne startup running the face recognition trials, estimates approximately 2.5 hours of teaching time a week is wasted through mandatory roll calls.

Student time is also wasted. Most of us remember waiting in line many minutes to get marked off on a roll during our school days. Roll call is not a constructive use of time, but it is required by law.

In wider society, it’s now estimated each of us spends three working weeks of the year simply authenticating ourselves to computers and other people. This is time consumed in providing identity documents, password resets, signing documents, waiting in phone queues, and so on.

Clearly authentication is vitally important, but it is consuming increasing amounts of our daily lives. Time is one resource none of us can ever recover.

Many of us remember how bad and slow airport immigration control was before Australia adopted face recognition. Now we can leave Australia with very short delays using SmartGates.

An electronic image of our passport photo is securely stored within the passport itself. The SmartGate terminal extracts the photo from the passport chip and gives us a blue ticket. We then insert the blue ticket into the SmartGate, look at the camera and wait for the face recognition technology. If the faces match, the gates open.

Privacy concerns

Privacy is often raised as an objection and this issue can never be dismissed lightly. Objections are mostly based on the collection and distribution of the photos. But every school collects photos of their students already and schools have strict control over distribution.

Such controls would necessarily be built into any school certified system. The only fundamental change to the process is whether the teacher or a computer recognises the student.

Commercial face recognition technology is often quite unreliable unless the person cooperates by standing still and looking directly at the camera like a SmartGate. This is quite different from non-cooperative recognition of persons without their knowledge using surveillance cameras. Cooperative face recognition systems are now well-accepted by the public at the borders, and privacy has been carefully considered in their design.

The emerging non-cooperative surveillance systems have greater potential for invasion of privacy, but they are also faster and more convenient. Indeed, Australia is now rolling out facial recognition technology that will see international travellers pass through airports without even producing their passports.

We can’t stop the tide – but we can manage it

Face recognition technologies will become widely adopted across society over the coming years. Concerns over implementation and privacy may slow adoption in some places, but the tide will come in and, when it does, it will change business practices right across the world.

So who should manage and advise on these changes? Government will certainly have a role, but it needs to be well advised and aware of best practice worldwide. Such a role is often played by the Biometrics Institute, which was established during the development of the SmartGate system to advise on biometrics best practice as well as privacy concerns.




Read more:
Big Brother is watching, but it’s nothing to fret about … honest


This technology has the ability to free up our time and reduce the costs of necessary compliance, as has already been demonstrated at the airport. As with all new technologies, face recognition raises legitimate concerns. Constructive policies and dialogue are the preferred way forward to gain the maximum benefit for society at large, and to make sure we do the least harm.

Brian Lovell, Research Director of the Security and Surveillance Research group; Professor, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


‘Alexa, call my lawyer!’ Are you legally liable if someone makes a purchase using your virtual assistant?


If you have voice shopping activated on your voice assistant, anyone in your home could potentially purchase items in your name.
Shutterstock

 

Mark Giancaspro, University of Adelaide

When Amazon launched its Alexa virtual assistant in 2014, it probably didn’t think that a bird would expose a potentially significant legal issue with the device. But an African grey parrot named Rocco, living in Blewbury, England, appears to have done just that.

Last month, Rocco made headlines for his habit of secretly ordering goods through his owner’s voice-activated Alexa device, which charges purchases to the linked Amazon account. The African grey species, which is renowned for its ability to mimic human speech, successfully ordered fruit, vegetables, ice-cream, a kettle, light bulbs and a kite.

Virtual assistants such as Alexa are growing in popularity. The number of users worldwide is projected to reach 1.8 billion by 2021. Unlike some rival models, such as Google Home, Alexa does not have individual voice recognition capability. Since Alexa cannot currently be trained to respond only to a selected person, anyone in your home could purchase items through your account.

Rocco’s ability to manipulate Alexa raises an important question: if someone made an unauthorised purchase on your Alexa device, would you be legally liable to pay for it?

The answer lies in contract law.




Read more:
Digital assistants like Alexa and Siri might not be offering you the best deals


You are responsible

The setting for voice purchasing via Alexa can only be switched on or off. That is, either the function is deactivated so that no one can make vocal purchase orders at all, or it’s calibrated to require a vocal confirmation code to authorise purchases.

In the first case, you cannot enjoy one of the technology’s most convenient features. In the second case, you are still susceptible to a third party – human or capable animal – overhearing and mimicking your voice to make illegitimate purchases. You must then act swiftly to cancel the order in time.

Amazon’s Conditions of Use, which govern voice purchasing through Alexa, state:

You are responsible for maintaining the confidentiality of your account and password and for restricting access to your account, and you agree to accept responsibility for all activities that occur under your account or password.

A golden rule of Australian contract law is that once you sign a contract you are deemed to have read, understood and accepted the terms – even if you haven’t. This is also the legal position in the US, whose laws govern the Amazon Conditions of Use.

So, when you sign up to use Alexa, you agree to be responsible for any purchases made on the device by you, your resident parrot, a mischievous friend or relative, or an unwelcome burglar. It doesn’t matter whether you intended the purchase or not.




Read more:
There’s a reason Siri, Alexa and AI are imagined as female – sexism


There are exceptions

If your pet is responsible, you will have a stronger case to avoid paying because animals other than humans lack the legal capacity to enter into contracts, so the transaction would be “voidable”. If a human is to blame, which is more likely, there is a legal exception that might still save you having to pay up.

 

African grey parrots are very good at mimicking human speech.

 

 

Under both Australian and US law, where a party to a contract is mistaken about the identity of their counterpart, the contract may be void under the “doctrine of mistake”.

In Australia, this rule applies where parties do not contract face-to-face, which will always be the case when someone orders through Amazon via Alexa. The critical factor is “materiality” – you need to prove that mistaken identity was vitally important to the transaction.

This will be difficult given Amazon has no interest in who specifically is ordering its products, and the Alexa owner would not normally care who at Amazon’s end has processed the order. But the fact someone made a purchase without the owner’s permission in circumstances where they could not reasonably prevent it might suffice as “material” for the courts.

American law is similar. Section 153 of the influential Restatement (Second) of Contracts states that a party can plead mistake and escape the contract where the mistake is material, and:

  1. enforcing the contract would be unconscionable (unjust), or
  2. the other party had reason to know of the mistake or actually caused it through their own fault.

Amazon would never be at fault, nor able to tell if an unauthorised party made a purchase on Alexa, so you would need to prove that the transaction was unjust and that mistaken identity was critically important.

A potential snag is the exception stated in Section 154: this says that Section 153 won’t apply if you and the other party have agreed that you will bear the risk. It might come down to how a court reads the Amazon Conditions of Use.




Read more:
Do I want an always-on digital assistant listening in all the time?


Legal precedents

Recent US court decisions emphasise that the mistake doctrine won’t apply where the other party’s identity is immaterial or irrelevant. Again, it would certainly be relevant where the Alexa owner had no way of preventing the unauthorised purchase (such as criminal activity). Enforcement would be grossly unfair in that situation.

The courts would probably not be as lenient if it were a friend, relative or pet doing the deed, as their use of Alexa is an assumed risk on the owner’s part. But it is still arguable that the owner should be legally excused because they had no involvement whatsoever in the purchase. The nature and value of the products purchased might also weigh into a court’s assessment.

To avoid a costly lawsuit, Alexa owners should deactivate voice purchasing when the unit is unsupervised, or discreetly implement and use a confirmation code for voice purchases.

Users should also regularly check their accounts to ensure that any unauthorised purchases are picked up early and cancelled in time.

Finally, consider a dog instead of a parrot.

Mark Giancaspro, Lecturer in Law, University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.

