The workplace challenge facing Australia (spoiler alert – it’s not technology)


Power imbalances are doing far more to change the way we work than are apps.
Shutterstock

 

Sarah Kaine, University of Technology Sydney

This is part of a major series called Advancing Australia, in which leading academics examine the key issues facing Australia in the lead-up to the 2019 federal election and beyond. Read the other pieces in the series here.


With all the hype around the future of work, you could be forgiven for thinking the biggest issue in the future of employment is the impending takeover of your job by a robot or an algorithm.

Talk about the workplace of the future has become fixated on technological displacement almost to the point of hysteria. There is little doubt that technological development will change the way we work, as it has in the past.

But for most Australians the reality will be much less dramatic. The biggest changes in the working lives of Australians over the past 20 years have arguably not been technological – few of us are sending our avatars to meetings or writing code.

Many of us are, however, lamenting the paradox of feeling overworked yet, at the same time, insecure in our employment. A significant proportion contend with record low wages growth. Others remain less than fully employed.




Read more:
Our culture of overtime is costing us dearly


Some will say that the rate of insecure or non-permanent work has remained fairly constant over the past two decades. This belies the lived experience of workers. They have repeatedly been found to perceive their connections to the workplace and labour market as precarious and laden with personal risk.

Power has been shifting

The changes have often involved the fragmentation or fissuring of work through outsourcing, global supply chains, independent contracting, labour hire and digital labour platforms.

But there has been more to it than the tweaking of business practices. The relationship between business and the state has been subject to a fundamental realignment.

Since the industrial relations changes in the early 1990s, which moved the setting of wages and conditions away from centralised institutions towards the workplace, the locus of power in the labour market has undergone substantial recalibration.

Collective representation of workers has declined sharply. This is recognised as contributing to the wage stagnation being felt in Australia and other rich nations.

A 2018 article in The Economist acknowledged this, noting that while politicians were scrambling for scapegoats and solutions, addressing stagnant wages required “a better understanding of the relationship between pay, productivity and power”.

A crucial aspect of understanding this relationship is recognising the impact that business consolidation has had. It has not only changed the experience of work, but also altered the balance of power between businesses and workers and between some businesses and other businesses.




Read more:
This is what policymakers can and can’t do about low wage growth


The ascendance of global monoliths – such as Walmart, Amazon, Apple and Uber (and the big retailers and e-tailers in Australia) – has resulted in organisations that wield enormous economic and cultural power. This has led not only to a reduction in worker power but also to the creation of a crushingly competitive environment for the businesses that have to contend with contract terms dictated by the corporate giants.

What has been the result of the combination of changing business models, reconfigured institutions and the onslaught of business consolidation?

We have seen hyper-competition based on low labour costs, management approaches that skirt worker protection laws, and weaker regulatory oversight.

It has manifested in almost weekly scandals regarding sham contracting, exploitation of workers and what appears to be an epidemic of underpayment in a roll call of some of Australia’s most “successful” companies, among them 7-Eleven, Caltex and Domino’s.

We can shift it back

The policy prescription to remedy the scourge of work insecurity and exploitation is decidedly unsexy. It goes against the zeitgeist, which seems to suggest that any change disrupting an existing system, rule or institution should be hailed as “innovative” and left uncontested.

It requires some reflection and the rebirth of aspects of our industrial relations system that have been lost but have redeeming features.

Key among these old-fashioned remedies is the encouragement of workers and employers to organise and recentralise bargaining.

A stated aim of the federal Conciliation and Arbitration Act 1904 was “to facilitate and encourage the organisation of representative bodies of employers and of employees”.

Granted, back then the act was also about keeping industrial peace by preventing lockouts and strikes. This is no longer much of an issue in our era of record low industrial action. But, in the current context of fragmented work, it is unrealistic to expect individual employers and employees to engage in endless rounds of labour-intensive productivity bargaining, with little to show for it.

The economies of scale that were part of a centralised system were lost when the workplace-based bargaining system took over. These could be regained, to the advantage of both employees and employers.




Read more:
Bargaining the Qantas way: how not to run an industrial dispute


While that policy prescription looks similar to the original Conciliation and Arbitration system, the rationale behind it differs markedly.

No longer would it be simply about addressing the power dynamics between employers and employees. It would also be about addressing the inequitable power dynamics between mega-corporations and businesses subjected to their might.

A challenge for left and right

Arguments relating to the need for flexibility and regulatory reform, which were the basis of the 1990s decentralisation, were not without merit. Global competition was accelerating and there was a real concern that the Australian economy would not be able to keep pace. So greater agency was given to businesses so they could adjust and lift productivity.

But we are now living in very different times. Neither excessive industrial action nor the spectre of poor productivity looms. It is neither intellectually nor politically honest to use these as a basis for opposing proposals to recentralise bargaining.

Also, we need to acknowledge that the biggest beneficiaries of the disaggregation introduced in the early 1990s were the biggest businesses.

A more centralised system could allow employers and employees to combine their power to counter competitive pressures from mega-corporations that want to reduce labour standards. Those pressures have facilitated toxic workplace practices, including intensive surveillance, unrealistic performance expectations, avoidance of entitlements and exploitation of workers further down supply chains.

Constructing an industrial relations framework that tackles the insecurity that is being experienced now and the further insecurity that may be wrought by technological change is potentially confronting for both sides of the ideological divide.

Elements on the left may be reluctant to acknowledge that not all businesses are the same, that some are being squashed by the structure of the market. Some on the right might not be prepared to concede that the bright idea of the 1990s – reducing union influence and worker voices – has succeeded so well as to create perverse and economically unhelpful outcomes.

Both sides need to lift their gaze above the workplace. They need to recognise that the earlier reforms are no longer the right ones, admit that the individual business might not be the best level at which to manage technological change, and acknowledge where the new power lies, then act accordingly.




Read more:
Why are unions so unhappy? An economic explanation of the Change the Rules campaign




Sarah Kaine, Associate Professor UTS Centre for Business and Social Innovation, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Australians want to support government use and sharing of data, but don’t trust their data will be safe


A new survey reveals community attitudes towards the use of personal data by government and researchers.
Shutterstock

 

Nicholas Biddle, Australian National University and Matthew Gray, Australian National University

Never has more data been held about us by government or companies that we interact with. Never has this data been so useful for analytical purposes.

But with such opportunities come risks and challenges. If personal data is going to be used for research and policy purposes, we need effective data governance arrangements in place, and community support (social licence) for this data to be used.

The ANU Centre for Social Research and Methods has recently undertaken a survey of a representative sample of Australians to learn their views about how personal data is used, stored and shared.

While Australians report a high level of support for the government to use and share data, there is less confidence that the government has the right safeguards in place or can be trusted with people’s data.




Read more:
Soft terms like ‘open’ and ‘sharing’ don’t tell the true story of your data


What government should do with data

In the ANUPoll survey of more than 2,000 Australian adults (available for download at the Australian Data Archive) we asked:

On the whole, do you think the Commonwealth Government should or should not be able to do the following?

Six potential data uses were given.

 

Chart: Do you think the Commonwealth Government should or should not be able to …? (ANU Centre for Social Research and Methods Working Paper)

 

Overall, Australians are supportive of the Australian government using data for purposes such as allocating resources to those who need it the most, and ensuring people are not claiming benefits to which they are not entitled.

They were slightly less supportive of providing data to researchers, though most still agreed or strongly agreed that it was worthwhile.

Perceptions of government data use

Community attitudes to the use of data by government are tied to perceptions about whether the government can keep personal data secure, and whether it’s behaving in a transparent and trustworthy manner.

To measure views of the Australian population on these issues, respondents were told:

Following are a number of statements about the Australian government and the data it holds about Australian residents.

They were then asked to what extent they agreed or disagreed that the Australian government:

  • could respond quickly and effectively to a data breach
  • has the ability to prevent data being hacked or leaked
  • can be trusted to use data responsibly
  • is open and honest about how data are collected, used and shared.

Respondents did not express strong support for the view that the Australian government is able to protect people’s data, or is using data in an appropriate way.

 

Chart: To what extent do you agree or disagree that the Australian Government …? (ANU Centre for Social Research and Methods Working Paper)

 




Read more:
What are tech companies doing about ethical use of data? Not much


We also asked respondents to:

[think] about the data about you that the Australian Government might currently hold, such as your income tax data, social security records, or use of health services.

We then asked for their level of concern about five specific forms of data breaches or misuse of their own personal data.

We found that there are considerable concerns about different forms of data breaches or misuse.

More than 70% of respondents were concerned or very concerned about the accidental release of personal information, deliberate hacking of government systems, and data being provided to consultants or private sector organisations who may misuse the data.

 

Chart: Level of concern about specific forms of data breaches or misuse of a person’s own data (ANU Centre for Social Research and Methods Working Paper)

 

More than 60% were concerned or very concerned about their data being used by the Australian government to make unfair decisions. And more than half were concerned or very concerned about their data being provided to academic researchers who may misuse their information.




Read more:
Facebook’s data lockdown is a disaster for academic researchers


Trust in government to manage data

The data environment in Australia is changing rapidly. More digital information about us is being created, captured, stored and shared than ever before, and there is a greater capacity to link information across multiple sources of data, and across multiple time periods.

While this creates opportunities, it also creates the risk that the data will be used in a way that is not in our best interests.

There is policy debate at the moment about how data should be used and shared. If we don’t make use of the data available, that has costs in terms of worse service delivery and less effective government. So, locking data up is not a cost-free option.

But sharing data or making data available in a way that breaches people’s privacy can be harmful to individuals, and may generate a significant (and legitimate) public backlash. This would reduce the chance of data being made available in any form, and mean that the potential benefits of improving the wellbeing of Australians are lost.

If government, researchers and private companies want to be able to make use of the richness of the new data age, there is an urgent and continuing need to build up trust across the population, and to put policies in place that reassure consumers and users of government services.

Nicholas Biddle, Associate Professor, ANU College of Arts and Social Sciences, Australian National University and Matthew Gray, Director, ANU Centre for Social Research and Methods, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Your car is more likely to be hacked by your mechanic than a terrorist


Lego Mechanic might look sweet and innocent, but what’s that smile really hiding?
Flickr/Jeff Eaton, CC BY-NC-SA

 

Richard Matthews, University of Adelaide

When it comes to car hacking, you should be more worried about dodgy dealers than one-off hackers with criminal intent.

Hollywood would have us believe our cars are extremely vulnerable to hackers. A hacker remotely logs into the onboard computer of a car on display in a showroom, causing the car to burst through the glass out onto the street – just in the nick of time to block a car chase.

 

Video: Car hacking scene in Hollywood blockbuster The Fate of the Furious.

 

And researchers have had some success replicating such a scenario. In 2015, headlines were made all over the world when security researchers were able to hack a Jeep Cherokee. They remotely controlled everything from windscreen wipers and air conditioning to the car’s ability to accelerate. Ultimately they crashed the car on a nearby embankment, safely ending their experiment.

If you believed everything that has been written since, you would think we are all driving around in accidents waiting to happen. At a moment’s notice any criminal could hack your vehicle, seize control and kill everyone inside.

While this threat may exist, it has never happened in the real world – and it’s significantly overhyped.




Read more:
Here’s how we can stop driverless cars from being hacked


Cars are now controlled by computers

Today’s motor vehicles are a complicated system of interconnected electrical sub-systems, where traditional mechanical connections have been replaced with electrical counterparts.

Take the accelerator, for example. This simple device used to be controlled by a physical cable connected to a valve on the engine. Today it is controlled by a drive-by-wire system.

Under a drive-by-wire system, the position of the throttle valve is controlled by a computer. This computer receives signals from the accelerator and correspondingly instructs a small motor connected to the throttle valve. Many of the engineering benefits are unnoticed by a typical consumer, but this system allows an engine to run more smoothly.
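To make this concrete, here is a minimal, hypothetical sketch of such a control loop in Python: each cycle, the computer reads the pedal position and nudges the throttle valve toward it. The function names and the smoothing gain are illustrative assumptions, not any manufacturer’s code.

    # A hypothetical drive-by-wire control step: the computer reads the
    # accelerator pedal and moves the throttle valve a fraction of the
    # way toward the demanded position, so the engine responds smoothly.
    def throttle_step(pedal: float, valve: float, gain: float = 0.2) -> float:
        return valve + gain * (pedal - valve)

    # Simulate the driver flooring the pedal: the valve opens gradually
    valve = 0.0
    for _ in range(10):
        valve = throttle_step(pedal=1.0, valve=valve)
        print(f"throttle valve opening: {valve:.2f}")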

A failure of the drive-by-wire system was suspected to be the cause of unintended acceleration in 2002 Toyota vehicles. The fault was linked to at least one fatal crash, with the resulting case settled out of court in 2017. An analysis commissioned by the US National Highway Traffic Safety Administration could not rule out software error, but did find significant mechanical defects in pedals.

These were ultimately errors in quality, not hacked cars. But it does introduce an interesting scenario. What if someone could program your accelerator without your knowledge?

Hack the computer and you can control the car

The backbone of today’s interconnected vehicle is a protocol called the Controller Area Network (CAN bus). The network is built on the principle of a master control unit with multiple slave devices.

Slave devices in your car could be anything from the switch on the inside of your door, to the roof light, and even the steering wheel. These devices exchange signals with the master unit. For example, the master unit could receive a signal from a door switch and, based on this, send a signal to the roof light to turn it on.

The problem is, if you have physical access to the network you can send and receive signals to any devices connected to it.
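As a rough illustration of what that means, the sketch below uses the open-source python-can library to transmit and listen on a vehicle network. The channel name, arbitration ID and payload are illustrative assumptions, not any real car’s commands.

    import can

    # Connect to a CAN interface (Linux socketcan; channel name assumed)
    bus = can.interface.Bus(channel="can0", interface="socketcan")

    # Anyone with physical access can transmit a frame...
    msg = can.Message(arbitration_id=0x1A4, data=[0x01], is_extended_id=False)
    bus.send(msg)

    # ...and eavesdrop on every frame on the bus, because the protocol
    # has no built-in authentication or encryption
    for frame in bus:
        print(frame)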

While you do need physical access to breach the network, such access is easily gained via an onboard diagnostic port hidden out of sight under your steering wheel. Devices such as Bluetooth, cellular and Wi-Fi modules, which are being added to cars, can also provide access, but not as easily as simply plugging in.

Bluetooth, for example, only has a limited range, and to access a car via Wi-Fi or cellular you still require the vehicle’s IP address and access to the Wi-Fi password. The Jeep hack mentioned above was enabled by weak default passwords chosen by the manufacturer.




Read more:
Australia’s car industry needs cybersecurity rules to deal with the hacking threat


Enter the malevolent mechanic

Remote car hacks aren’t particularly easy, but that doesn’t mean it’s OK to be lured into a false sense of security.

The Evil Maid attack is a term coined by security analyst Joanna Rutkowska. It’s a simple attack that exploits the prevalence of devices left insecure in hotel rooms around the world.

The basic premise of the attack is as follows:

  1. the target is away on holiday or business with one or more devices
  2. these devices are left unattended in the target’s hotel room
  3. the target assumes the devices are secure since they are the only one with the key to the room, but then the maid comes in
  4. while the target is away, the maid does something to the device, such as installing malware or even physically opening up the device
  5. the target has no idea and is breached.

If we look at this attack in the context of the CAN bus protocol it quickly becomes apparent the protocol is at its weakest when physical access is granted. Such access is granted to trusted parties whenever we get our vehicles serviced, when it’s out of our sight. The mechanic is the most likely “maid”.

As part of a good maintenance routine, your mechanic will plug a device into the On Board Diagnostic (OBD) port to ensure there are no fault or diagnostic codes for the vehicle that need to be resolved.
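That routine scan is easy to reproduce: a cheap ELM327-style adapter plus the open-source python-obd library will read the same codes. A minimal sketch, assuming an adapter is plugged into the port:

    import obd

    # Auto-detect the plugged-in adapter and the vehicle's protocol
    connection = obd.OBD()

    # Ask the car for its stored diagnostic trouble codes (DTCs)
    response = connection.query(obd.commands.GET_DTC)
    for code, description in response.value:
        print(code, description)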

 

An example of an On Board Diagnostic (OBD) port in a car. This port is normally under the steering wheel.
endolith/flickr

 

But what would happen if a mechanic needed some extra business? Perhaps they want you to come back for service more often. Could they program your electronic brake sensor to trigger early by manipulating a control algorithm? Yes, and this would result in a shorter life for your brake pads.
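The change involved could be as small as a single constant. A purely hypothetical sketch of the idea (the names and thresholds are invented for illustration, not taken from any vehicle’s firmware):

    # Hypothetical brake-pad wear warning: firmware compares the sensor's
    # reported pad thickness against a limit before lighting the warning.
    PAD_WEAR_LIMIT_MM = 3.0    # stock limit (illustrative value)
    # PAD_WEAR_LIMIT_MM = 6.0  # tampered limit: the warning fires far earlier

    def brake_pad_warning(pad_thickness_mm: float) -> bool:
        return pad_thickness_mm <= PAD_WEAR_LIMIT_MM

    print(brake_pad_warning(4.5))  # False with the stock limit, True if tampered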

Maybe they could modify one of the many computers within your vehicle so that it logs more kilometres than are actually being done? Or if they wanted to hide the fact they had taken your Ferrari for a spin, they could program the computer to wind back the odometer. Far easier than the manual method, which ended so badly in the 1986 film Ferris Bueller’s Day Off.


All of these are viable hacks – and your mechanic could be doing them right now.




Read more:
We asked people if they would trust driverless cars


The case for verification and transparency

This isn’t a new problem. It’s no different from a used car dealer using a drill to run the speedo back to show a lower mileage. New technologies just mean the same tricks could be implemented in different ways.

Unfortunately, there is little that could be done to prevent a bad mechanic from doing such things.

Security researchers are currently focused on improving the security of the CAN bus protocol. The likely reason no major incident has been reported to date is that the CAN bus relies on the obscurity of its implementation for security.

Verification and transparency could be a solution. One system, proposed by researchers at Black Hat, involves an audit log that could help everyday people assess the risk of unauthorised changes to their vehicle, and improve the robustness of the system.

Until then, we will just have to keep using a trusted mechanic.

Richard Matthews, Lecturer Entrepreneurship, Commercialisation and Innovation Centre | PhD Candidate in Image Forensics and Cyber | Councillor, University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Receiving a login code via SMS and email isn’t secure. Here’s what to use instead


No method is perfect, but physical security keys are a reliable form of multi-factor authentication.
Shutterstock

 

Mike Johnstone, Edith Cowan University

When it comes to personal cybersecurity, you might think you’re doing alright. Maybe you’ve got multi-factor authentication set up on your phone so that you have to enter a code sent to you by SMS before you can log in to your email or bank account from a new device.

What you might not realise is that new scams have made authentication using a code sent by SMS message, email or voice call less secure than it used to be.

Multi-factor authentication is listed in the Australian Cyber Security Centre’s Essential Eight Maturity Model as a recommended security measure for businesses to reduce their risk of cyber attack.

Last month, in an updated list, authentication via SMS messages, emails or voice calls was downgraded, indicating they’re no longer considered optimal for security.

Here’s what you should do instead.

What is multi-factor authentication?

Whenever we log in to an app or device, we are usually asked for some form of identity check. This is often something we know (like a password), but it can also be something we have (like a security key or an access card) or something we are (like a fingerprint).

The last of these is often preferred because, while you can forget a password or a card, your biometric signature is always with you.

Multi-factor authentication is when more than one identity check is conducted via different channels. For instance, it’s common these days to enter your password and then an extra authentication code that has been sent to your phone via SMS message, email or voice mail.

Lots of services, such as banks, already offer this feature. A “one-time” code is sent to your phone to confirm your authority to make a transaction.

This is good because:

  • it uses two separate channels
  • the code is randomly generated, so it can’t be guessed
  • the code has a limited lifetime (see the sketch below).
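Such codes are typically produced by a time-based one-time password (TOTP) algorithm. The sketch below, using only Python’s standard library, shows the idea in the spirit of RFC 6238; the shared secret is illustrative, and a real service provisions a unique secret per user.

    import hashlib
    import hmac
    import struct
    import time

    SECRET = b"illustrative-shared-secret"  # assumption: provisioned at setup

    def one_time_code(secret: bytes, period: int = 30, digits: int = 6) -> str:
        # The counter changes every `period` seconds, which is what
        # gives the code its limited lifetime
        counter = int(time.time()) // period
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, as in RFC 4226
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    # Server and phone share the secret, so both compute the same code,
    # and it expires when the 30-second window rolls over
    print(one_time_code(SECRET))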

How could this go wrong?

Suppose a cybercriminal has stolen your phone, but you have it locked via fingerprint. If the criminal wants to compromise your bank account and attempts to log in, your bank sends an authentication code to your phone.

Depending on how your phone settings are configured, the code could pop up on your phone screen even when it’s still locked. The criminal could then input the code and access your bank account. Note that “do not disturb” settings on your phone won’t help, as the message still appears, albeit quietly. To avoid this problem, you need to disable message previews entirely in your phone’s settings.

A more elaborate hack involves “SIM swapping”. If a criminal has some of your identity details, they might be able to convince your phone provider that they are you and request a new SIM attached to your phone number to be sent to them. That way, any time an authentication code is sent from one of your accounts, it will go to the hacker instead of you.

This happened to a technology journalist in the US a couple of years ago, who described the experience:

At about 9pm on Tuesday, August 22 a hacker swapped his or her own SIM card with mine, presumably by calling T-Mobile. This, in turn, shut off network services to my phone and, moments later, allowed the hacker to change most of my Gmail passwords, my Facebook password, and text on my behalf. All of the two-factor notifications went, by default, to my phone number so I received none of them and in about two minutes I was locked out of my digital life.

Then there is the question of whether you want to provide your phone number to the service you are using. Facebook has come under fire in recent days for requiring users to provide their phone number to secure their accounts, but then allowing others to search for their profile via their phone number. They have also reportedly used phone numbers to target users with ads.

This is not to say that splitting identity checks is a bad thing; it’s just that sending part of an identity check via a less secure channel promotes a false sense of security that could be worse than using no security at all.

Multi-factor authentication is important – as long as you do it via the right channels.

Which authentication combinations are best?

Let’s consider some combinations of multi-factor authentication that have varying degrees of ease of use and security.

An obvious first choice is something you know and something you have, say a password and a physical access card. A cybercriminal has to obtain both to impersonate you. Not impossible, but difficult.

Another combination is a password and a voiceprint. A voiceprint recognition system records you speaking a particular passphrase and then matches your voice when you need to authenticate your identity. This is attractive because you can’t leave your voice at home or in the car.

But could your voice be forged? With the aid of digital software, it might be possible to take an existing recording of your voice, unpack and re-sequence it to produce the required phrase. This is somewhat challenging, but not impossible.

A third combination is a card and a voiceprint. This choice removes the need to remember a password, which could be stolen, and as long as you keep the physical token (the card or key) safe, it is very hard for someone else to impersonate you.

There are no perfect solutions yet, and the most secure form of authentication you can use depends on what the service in question, such as your bank, offers.

Cyber security is about managing risk, so which combination of multi-factor authentication suits your needs depends on the balance you accept between usability and security.

Mike Johnstone, Security Researcher, Associate Professor in Resilient Systems, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Fingerprint and face scanners aren’t as secure as we think they are


Biometric systems are increasingly used in our civil, commercial and national defence applications.
Shutterstock

 

Wencheng Yang, Edith Cowan University and Song Wang, La Trobe University

Despite what every spy movie in the past 30 years would have you think, fingerprint and face scanners used to unlock your smartphone or other devices aren’t nearly as secure as they’re made out to be.

While it’s not great if your password is made public in a data breach, at least you can easily change it. If the scan of your fingerprint or face – known as “biometric template data” – is revealed in the same way, you could be in real trouble. After all, you can’t get a new fingerprint or face.

Your biometric template data are permanently and uniquely linked to you. The exposure of that data to hackers could seriously compromise user privacy and the security of a biometric system.

Current techniques provide effective security from breaches, but advances in artificial intelligence (AI) are rendering these protections obsolete.




Read more:
Receiving a login code via SMS and email isn’t secure. Here’s what to use instead


How biometric data could be breached

If a hacker wanted to access a system that was protected by a fingerprint or face scanner, there are a number of ways they could do it:

  1. your fingerprint or face scan (template data) stored in the database could be replaced by a hacker to gain unauthorised access to a system
  2. a physical copy or spoof of your fingerprint or face could be created from the stored template data (with play doh, for example) to gain unauthorised access to a system
  3. stolen template data could be reused to gain unauthorised access to a system
  4. stolen template data could be used by a hacker to unlawfully track an individual from one system to another.


Biometric data need urgent protection

Nowadays, biometric systems are increasingly used in civil, commercial and national defence applications.

Consumer devices equipped with biometric systems are found in everyday electronic devices like smartphones. MasterCard and Visa both offer credit cards with embedded fingerprint scanners. And wearable fitness devices are increasingly using biometrics to unlock smart cars and smart homes.

So how can we protect raw template data? A range of encryption techniques have been proposed. These fall into two categories: cancellable biometrics and biometric cryptosystems.




Read more:
When your body becomes your password, the end of the login is nigh


In cancellable biometrics, complex mathematical functions are used to transform the original template data when your fingerprint or face is being scanned. This transformation is non-reversible, meaning there’s no risk of the transformed template data being turned back into your original fingerprint or face scan.

In a case where the database holding the transformed template data is breached, the stored records can be deleted. Additionally, when you scan your fingerprint or face again, the scan will result in a new unique template even if you use the same finger or face.
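One widely studied transform of this kind is seeded random projection (as in BioHashing). The sketch below shows the idea with illustrative dimensions: a user-specific seed drives a one-way, many-to-one mapping, and issuing a new seed “cancels” the old template.

    import numpy as np

    def cancellable_template(features: np.ndarray, seed: int, out_dim: int = 32) -> np.ndarray:
        # Project the raw features through a seeded random matrix and
        # binarise; the mapping is many-to-one, so the stored bits
        # cannot be inverted back to the original scan
        rng = np.random.default_rng(seed)
        projection = rng.standard_normal((out_dim, features.size))
        return (projection @ features > 0).astype(np.uint8)

    raw = np.random.default_rng(0).standard_normal(128)  # stand-in for scan features
    enrolled = cancellable_template(raw, seed=1234)

    # A rescan of the same finger (features plus noise) still mostly matches
    rescan = raw + 0.05 * np.random.default_rng(1).standard_normal(128)
    agreement = np.mean(cancellable_template(rescan, seed=1234) == enrolled)
    print(f"matching bits: {agreement:.2f}")

    # After a breach, re-enrolling under a new seed revokes the old template
    reissued = cancellable_template(raw, seed=9999)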

In biometric cryptosystems, the original template data are combined with a cryptographic key to generate a “black box”. The cryptographic key is the “secret” and query data are the “key” to unlock the “black box” so that the secret can be retrieved. The cryptographic key is released upon successful authentication.
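A classic construction along these lines is the fuzzy commitment scheme. The simplified sketch below binds a random key to biometric bits using a repetition code, so a slightly noisy rescan still releases the key; real systems use stronger error-correcting codes, and every parameter here is illustrative.

    import hashlib
    import secrets

    REPEAT = 5  # each key bit is repeated 5 times, tolerating 2 errors per block

    def encode(key_bits):
        # Repetition-code expansion of the key into a longer codeword
        return [b for b in key_bits for _ in range(REPEAT)]

    def decode(code_bits):
        # Majority vote within each block corrects scattered bit errors
        return [int(sum(code_bits[i:i + REPEAT]) > REPEAT // 2)
                for i in range(0, len(code_bits), REPEAT)]

    def enroll(template_bits):
        key_bits = [secrets.randbelow(2) for _ in range(len(template_bits) // REPEAT)]
        helper = [c ^ t for c, t in zip(encode(key_bits), template_bits)]
        key_hash = hashlib.sha256(bytes(key_bits)).hexdigest()
        # Only these are stored: neither reveals the template or the key
        return helper, key_hash

    def authenticate(query_bits, helper, key_hash):
        recovered = decode([h ^ q for h, q in zip(helper, query_bits)])
        return hashlib.sha256(bytes(recovered)).hexdigest() == key_hash

    # Enrol with 40 template bits, then authenticate with a noisy rescan
    template = [secrets.randbelow(2) for _ in range(40)]
    helper, key_hash = enroll(template)
    rescan = template.copy()
    rescan[3] ^= 1
    rescan[17] ^= 1  # two flipped bits, in different blocks
    print(authenticate(rescan, helper, key_hash))  # True: the errors are corrected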

AI is making security harder

In recent years, new biometric systems that incorporate AI have really come to the forefront of consumer electronics. Think: smart cameras with built-in AI capability to recognise and track specific faces.

But AI is a double-edged sword. While new developments, such as deep artificial neural networks, have enhanced the performance of biometric systems, potential threats could arise from the integration of AI.

For example, researchers at New York University created a tool called DeepMasterPrints. It uses deep learning techniques to generate fake fingerprints that can unlock a large number of mobile devices. It’s similar to the way that a master key can unlock every door.

Researchers have also demonstrated how deep artificial neural networks can be trained so that the original biometric inputs (such as the image of a person’s face) can be obtained from the stored template data.




Read more:
Facial recognition is increasingly common, but how does it work?


New data protection techniques are needed

Thwarting these types of threats is one of the most pressing issues facing designers of secure AI-based biometric recognition systems.

Existing encryption techniques designed for non-AI-based biometric systems are incompatible with AI-based biometric systems. So new protection techniques are needed.

Academic researchers and biometric scanner manufacturers should work together to secure users’ sensitive biometric template data, thus minimising the risk to users’ privacy and identity.

In academic research, special focus should be put on the two most important aspects: recognition accuracy and security. As this research falls within Australia’s science and research priority of cybersecurity, both government and private sectors should provide more resources to the development of this emerging technology.

Wencheng Yang, Post Doctoral Researcher, Security Research Institute, Edith Cowan University and Song Wang, Senior Lecturer, Engineering, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
