What are tech companies doing about ethical use of data? Not much


Tech companies have an economic imperative to avoid grappling too seriously with the ethical issues surrounding data usage.


James Arvanitakis, Western Sydney University

Our relationship with tech companies has changed significantly over the past 18 months. Ongoing data breaches, and the revelations surrounding the Cambridge Analytica scandal, have raised concerns about who owns our data, and how it is being used and shared.

Tech companies have vowed to do better. Following his grilling by both the US Congress and the European Parliament, Facebook CEO Mark Zuckerberg said Facebook would change the way it shares data with third-party suppliers. There is some evidence that this is occurring, particularly with advertisers.

But have tech companies really changed their ways? After all, data is now a primary asset in the modern economy.

To find out whether there’s been a significant realignment between community expectations and corporate behaviour, we analysed the data ethics principles and initiatives that various global organisations have committed to since the various scandals broke.

What we found is concerning. Some of the largest organisations have not demonstrably altered practices, instead signing up to ethics initiatives that are neither enforced nor enforceable.

Read more:
Big Data is useful, but we need to protect your privacy too

How we tracked this information

Before discussing our findings, some points of clarification.

Firstly, the issues of data, artificial intelligence (AI), machine learning and algorithms are difficult to disentangle, and their scope is contested. In fact, most of these organisations lump the concepts together, while for researchers and policy makers they present distinctly different challenges.

For example, machine learning, a branch of AI, is about building machines that learn on their own without supervision. As such, policy makers must ensure that machine learning algorithms are free from bias, and that they take various social and economic issues into consideration rather than treating everyone the same.
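As a rough illustration of what checking an algorithm for bias can look like in practice, the sketch below computes a simple “demographic parity” gap — the difference in favourable-outcome rates between two groups. The data, the loan-approval framing, and the function names are all invented for this example; real audits use far richer fairness metrics.

```python
# A minimal sketch (hypothetical data) of one common bias check:
# "demographic parity" compares the rate of favourable outcomes
# across groups. A large gap flags a potentially biased algorithm.

def positive_rate(decisions):
    """Fraction of decisions that are favourable (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical loan-approval decisions from a trained model,
# split by a protected attribute (e.g. two demographic groups).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2 of 8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")  # a gap this size warrants review
```

A gap near zero does not prove an algorithm is fair, but a large one is exactly the kind of signal regulators and internal reviewers would want surfaced.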

Secondly, the policies, statements and guidelines of the companies we looked at are not centrally located, consistently presented or simple to decipher.

To account for the lack of a consistent approach to data ethics among technology companies, our method was to survey the visible steps they have undertaken, and to look at the broad ethical principles they have embraced.

Five broad categories of data ethics

Some companies, such as Microsoft, IBM, and Google, have published their own AI ethical principles.

More companies, including Facebook and Amazon, have opted to keep ethics at arm’s length by joining consortiums, such as the Partnership on AI (PAI) and the Information Technology Industry Council (ITI). These two consortiums have published statements containing ethical principles. The principles are voluntary, and have no reporting requirements, objective standards or oversight.


We examined the content of the published ethical guidelines of these companies and consortiums, and found the principles fell into five broad categories.

  1. Privacy: privacy is widely acknowledged as an area of importance, highlighting that the focus for most of these organisations is a traditional consumer/supplier relationship. That is, the data provided by consumers is now owned by the company, which will use this data but respect confidentiality.
  2. Governance: these principles are about accountability in data management, ensuring quality and accuracy of data, and the ethical application of algorithms. The focus here is on the internal processes that should be followed.
  3. Fairness: fairness means using data and algorithms in a way that respects the person behind the data. That means taking safety into consideration, and recognising the impact the use of data can have on people’s lives. This includes a recognition of how algorithms relying on historical data or flawed programming can discriminate against marginalised communities.
  4. Shared benefit: this refers to the idea that data is owned by those who produce it and, as such, there should be joint control of the data, as well as shared benefits. We noted a lack of consensus or intention to adhere to this category.
  5. Transparency: it is here that a more nuanced understanding of data ownership begins to emerge. Transparency essentially refers to being open about the way data is collected and used, as well as eschewing unnecessary data collection. Given the commercial imperative of companies to protect confidential research and development, it’s not surprising this principle is only acknowledged by a handful of players.


Initiatives big tech companies have signed up to in particular categories of data ethics.
Author provided


Fairness and transparency are important

Our research suggests conversations about data ethics are largely focused on privacy and governance. But these principles are the minimum expected in a legal framework. If anything, the scandals of the past have shown us this is not enough.

Facebook is notable as a company keeping ethics at arm’s length. It’s a member of the Partnership on AI and the Information Technology Industry Council, but has eschewed publishing its own data ethics principles. And while there have been rumblings about a so-called “Fairness Flow” machine learning bias detector, and rumours of an ethics team at Facebook, details of both developments remain sketchy.

Meanwhile, the extent to which the Partnership on AI and the Information Technology Industry Council influence the behaviour of member companies is highly questionable. The Partnership on AI, which has more than 70 members, was formed in 2016, but it has yet to demonstrate any tangible outcomes beyond the publication of key tenets.

Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful

Better regulation is required

For tech companies, there may be a trade-off between treating data ethically and how much money they can make from that data. There is also a lack of consensus between companies about what the correct ethical approach looks like. So, in order to protect the public, external guidance and oversight is needed.

Unfortunately, in the new Australian Government Data Sharing and Release legislation, the government has so far kept its focus on privacy – a principle that’s already covered in legislation elsewhere.

The data-related events of the last few years have confirmed that we need a greater focus on data as a citizen right, not just a consumer right. As such, we need a greater focus on fairness, transparency and shared benefit – all areas currently neglected by companies and government alike.

The author would like to acknowledge the significant contribution made to this article by Laura Hill, a student in the Western Sydney University Bachelor of Law graduate entry program.

James Arvanitakis, Professor in Cultural and Social Analysis, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Uncategorized

Thanks for the $2 billion for small-business expansion; now all we need are plans to expand


Most small and medium-sized companies in Australia have no written plan for future growth.


Jana Matthews, University of South Australia

The Australian government has a plan to help the nation’s small and medium-sized businesses – but it’s not a very well-developed one.

Its cornerstone is A$2 billion for a “Securitisation Fund” to provide loans to small business through smaller banks and non-bank lenders, plus a “Business Growth Fund” that will enable big banks and super funds to take passive equity stakes in small business. The assumption is that more money will help small and medium-sized enterprises fund their expansion plans.

The problem is most small and medium companies do not have expansion plans.

The Australian Bureau of Statistics’ first management capability survey, published in August 2017, indicates that only a third of medium-sized companies (defined as those with 20 to 199 employees) have a written plan of any kind. The percentage is much lower for small companies – those with five to 19 employees (and lower still for micro-businesses, employing four or fewer people).

The Australian Centre for Business Growth at the University of South Australia collects data and delivers programs for those running small and medium-sized companies. Since it began in 2014, the centre has worked with more than 1,500 business proprietors. They all wanted to do better. Hardly any of them had a plan.

The federal government’s assistance package seems to assume that access to more capital will accelerate company growth. Our experience suggests executives of small and medium companies need knowledge capital as well as financial capital to grow.

If they don’t know what to do when, who to hire, how to manage people, or how to plan and execute, simply providing more money does not accelerate growth.

This year we collected data from 145 of the companies that have been through one of our growth programs. In the past financial year they increased their revenues, on average, by 27%, their profits by 19%, and employment by 32%. Their growth was the result of learning how to develop and execute an expansion plan, that is, knowing how to generate and where to deploy financial capital in order to grow.

Passing a public-interest test

It’s true that small and medium enterprises need money for growth. But before they get funding, they need to learn what to do with that money to grow.

Driver’s education and a proficiency test are compulsory before we give people a licence to drive a car. We need something equivalent before we provide public funds – even as loans – to businesses.

We justify allowing people to borrow against their homes to start a company because it’s “their decision” and “their money”. But if taxpayers’ money is being provided, the federal government has a duty of care to ensure every company has a comprehensive plan for growth before receiving funding.

That plan should cover all the bases: products, markets and customers; culture, people and organisation; finance; risks and externalities; and governance and strategy.

If financial capital is not coupled with knowledge capital, investments in small and medium-sized companies will not deliver returns. Only after those running a company understand how to grow, and have a plan to grow, will they achieve what we all want – more jobs, higher wages, and greater economic prosperity for all.

Jana Matthews, ANZ Chair in Business Growth. Director, Australian Centre for Business Growth, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Business Investigations

Conform to the social norm: why people follow what other people do


Some people just follow the social norm, whether it’s right or not.


Campbell Pryor, University of Melbourne and Piers Howe, University of Melbourne

Why do people tend to do what others do, prefer what others prefer, and choose what others choose?

Our study, published today in Nature Human Behaviour, shows that people tend to copy other people’s choices, even when they know that those people did not make their choices freely, and when the decision does not reflect their own actual preferences.

It is well established that people tend to conform to behaviours that are common among other people. These are known as social norms.

Yet our finding that people conform to others’ choices that they know are completely arbitrary cannot be explained by most theories of this social norm effect. As such, it sheds new light on why people conform to social norms.

Read more:
Digital assistants like Alexa and Siri might not be offering you the best deals

Would you do as others do?

Imagine you have witnessed a man rob a bank but then he gives the stolen money to an orphanage. Do you call the police or leave the robber be, so the orphanage can keep the money?

We posed this moral dilemma to 150 participants recruited online in our first experiment. Before they made their choice, we also presented information about how similar participants in a previous experiment had imagined acting during this dilemma.

Half of our participants were told that most other people had imagined reporting the robber. The remaining half were told that most other people had imagined not calling the police.

Crucially, however, we made it clear to our participants that these norms did not reflect people’s preferences. Instead, the norm was said to have occurred due to some faulty code in the experiment that randomly allocated the previous participants to imagining reporting or not reporting the robber.

This made it clear that the norms were arbitrary and did not actually reflect anybody’s preferred choice.

Whom did they follow?

We found that participants followed the social norms of the previous people, even though they knew they were entirely arbitrary and did not reflect anyone’s actual choices.

Simply telling people that many other people had been randomly allocated to imagine reporting the robber increased their tendency to favour reporting the robber.

A series of subsequent experiments, involving 631 new participants recruited online, showed that this result was robust. It held over different participants and different moral dilemmas. It was not caused by our participants not understanding that the norm was entirely arbitrary.

Why would people behave in such a seemingly irrational manner? Our participants knew that the norms were arbitrary, so why would they conform to them?

Is it the right thing to do?

One common explanation for norm conformity is that, if everyone else is choosing to do one thing, it is probably a good thing to do.

The other common explanation is that failing to follow a norm may elicit negative social sanctions, and so we conform to norms in an effort to avoid these negative responses.

Neither of these can explain our finding that people conform to arbitrary norms. Such norms offer no useful information about the value of different options or potential social sanctions.

Instead, our results support an alternative theory, termed self-categorisation theory. The basic idea is that people conform to the norms of certain social groups whenever they have a personal desire to feel like they belong to that group.

Importantly, for self-categorisation theory it does not matter whether a norm reflects people’s preference, as long as the behaviour is simply associated with the group. Thus, our results suggest that self-categorisation may play a role in norm adherence.

The cascade effect

But are we ever really presented with arbitrary norms that offer no rational reason for us to conform to them? If you see a packed restaurant next to an empty one, the packed restaurant must be better, right?


It’s a busy restaurant so it must be good, right?


Well, if everyone before you followed the same thought process, it is perfectly possible that an initial arbitrary decision by some early restaurant-goers cascaded into one restaurant being popular and the other remaining empty.

Termed an information cascade, this phenomenon emphasises how norms can snowball from potentially irrelevant starting conditions whenever we are influenced by people’s earlier decisions.
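The snowball dynamic described above can be sketched in a toy simulation (purely illustrative, and not part of the study): each diner simply joins whichever restaurant is busier, breaking ties at random, so a single arbitrary early choice determines which restaurant ends up “popular”.

```python
# Toy information-cascade simulation: diners copy earlier diners,
# so one arbitrary first choice snowballs into a lopsided outcome.
import random

def simulate_cascade(n_diners, seed=None):
    """Return final customer counts at two restaurants after a cascade."""
    rng = random.Random(seed)
    counts = [0, 0]  # customers so far at restaurant 0 and restaurant 1
    for _ in range(n_diners):
        if counts[0] > counts[1]:
            choice = 0          # follow the crowd
        elif counts[1] > counts[0]:
            choice = 1          # follow the crowd
        else:
            choice = rng.randrange(2)  # tie: an arbitrary early decision
        counts[choice] += 1
    return counts

# Whichever restaurant the first diner happens to pick gets everyone:
print(simulate_cascade(100, seed=1))
```

Under this extreme copy-the-majority rule the very first coin flip decides everything; real cascades are noisier, but the same mechanism lets an arbitrary start masquerade as a meaningful norm.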

Defaults may also lead to social norms that do not reflect people’s preferences but instead are driven by our tendency towards inaction.

For example, registered organ donors remain a minority in Australia, despite most Australians supporting organ donation. This is frequently attributed to our use of an opt-in registration system.

In fact, defaults may lead to norms occurring for reasons that run counter to the decision-maker’s interests, such as a company choosing the cheapest healthcare plan as a default. Our results suggest that people will still tend to follow such norms.

Conform to good behaviour

Increasingly, social norms are being used to encourage pro-social behaviour.

They have been successfully used to encourage healthy eating, increase attendance at doctor appointments, reduce tax evasion, increase towel reuse at hotels, decrease long-term energy use, and increase organ donor registrations.

Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content

The better we can understand why people conform to social norms, the better we will be able to design behavioural change interventions to address the problems facing our society.

The fact that the social norm effect works even for arbitrary norms opens up new and exciting avenues to facilitate behavioural change that were not previously possible.

Campbell Pryor, PhD Student in Psychology, University of Melbourne and Piers Howe, Senior Lecturer in Psychology, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Uncategorized

New guidelines for responding to cyber attacks don’t go far enough


If Australia’s electricity grid was targeted by a cyber attack, the fallout could be severe.


Adam Henry, UNSW and Greg Austin, UNSW

Debates about cyber security in Australia over the past few weeks have largely centred on the passing of the government’s controversial Assistance and Access Bill. But while government access to encrypted messages is an important subject, protecting Australia from cyber threats could depend more on developing a solid and robust cyber security response plan.

Australia released its first Cyber Incident Management Arrangements (CIMA) for state, territory and federal governments on December 12. It’s a commendable move towards a comprehensive national civil defence strategy for cyber space.

Coming at least a decade after the need was first foreshadowed by the government, this is just the initial step on a path that demands much more development. Beyond CIMA, the government needs to better explain to the public the unique threats posed by large scale cyber incidents and, on that basis, engage the private sector and a wider community of experts on addressing those unique threats.

Read more:
What skills does a cybersecurity professional need?

Australia is poorly prepared

The aim of the new cyber incident arrangements is to reduce the scope, impact and severity of a “national cyber incident”.

A national cyber incident is defined as being of potential national importance, but less severe than a “crisis” that would trigger the government’s Australian Government Crisis Management Framework (AGCMF).

Australia is currently ill-prepared to respond to a major cyber incident, such as the Wannacry or NotPetya attacks in 2017.

Wannacry severely disrupted the UK’s National Health Service, at a cost of A$160 million. NotPetya shut down the world’s largest shipping container company, Maersk, for several weeks, costing it A$500 million.

When costs for random cyber attacks are so high, it’s vital that all Australian governments have coordinated response plans to high-threat incidents. The CIMA sets out inter-jurisdictional coordination arrangements, roles and responsibilities, and principles for cooperation.

A higher-level cyber crisis that would trigger the AGCMF (a process that itself looks somewhat under-prepared) is one that:

… results in sustained disruption to essential services, severe economic damage, a threat to national security or loss of life.

More cyber experts and cyber incident exercises

At just seven pages in length, in glossy brochure format, the CIMA does not outline specific operational incident management protocols.

This will be up to state and territory governments to negotiate with the Commonwealth. That means the protocols developed may be subject to competing budget priorities, political appetite, divergent levels of cyber maturity, and, most importantly, staffing requirements.

Australia has a serious crisis in the availability of skilled cyber personnel in general. This is particularly the case in specialist areas required for the management of complex cyber incidents.

Government agencies struggle to compete with major corporations, such as the major banks, for the top-level recruits.


Australia needs people with expertise in cybersecurity.



The skills crisis is exacerbated by the lack of high quality education and training programs in Australia for this specialist task. Our universities, for the most part, do not teach – or even research – complex cyber incidents on a scale that could begin to service the national need.

Read more:
It’s time for governments to help their citizens deal with cybersecurity

The federal government must move quickly to strengthen and formalise arrangements for collaboration with key non-governmental partners – particularly the business sector, but also researchers and large non-profit entities.

Critical infrastructure providers, such as electricity companies, should be among the first businesses targeted for collaboration due to the scale of potential fallout if they came under attack.

To help achieve this, CIMA outlines plans to institutionalise, for the first time, regular cyber incident exercises that address nationwide needs.

Better long-term planning is needed

While these moves are a good start, there are three longer term tasks that need attention.

First, the government needs to construct a consistent, credible and durable public narrative around the purpose of its cyber incident policies, and associated exercise programs.

Former Cyber Security Minister Dan Tehan spoke of a single cyber storm, former Prime Minister Malcolm Turnbull spoke of a perfect cyber storm (several storms together), and Cyber Coordinator Alastair MacGibbon spoke of a cyber catastrophe as the only existential threat Australia faced.

But there is little articulation in the public domain of what these ideas actually mean.

The new cyber incident management arrangements are meant to operate below the level of a national cyber crisis. But the country is in dire need of a civil defence strategy for cyber space that addresses both levels of attack. There is no significant mention of cyber threats on the website of the Australian Disaster Resilience Knowledge Hub.

This is a completely new form of civil defence, and it may need a new form of organisation to carry it forward. A new, dedicated arm of an existing agency, such as the State Emergency Services (SES), is one potential solution.

One of us (Greg Austin) proposed in 2016 the creation of a new “cyber civil corps”. This would be a disciplined service relying on part-time commitments from the people best trained to respond to national cyber emergencies. A cyber civil corps could also help to define training needs and contribute to national training packages.

The second task falls to private businesses, which face potentially crippling costs from random cyber attacks.

They will need to build their own body of expertise in cyber simulations and exercises. Contracting out such responsibilities to consulting companies, or relying on one-off reports, would produce scattershot results. Any “lessons learnt” within firms about contingency management could fail to be consolidated and shared with the wider business community.

Read more:
The difference between cybersecurity and cybercrime, and why it matters

The third task of all stakeholders is to mobilise an expanding knowledge community led by researchers from academia, government and the private sector.

What exists at the moment is minimalist, and appears hostage to the preferences of a handful of senior officials in the Australian Cyber Security Centre (ACSC) and the Department of Home Affairs who may not be in post within several years.

Cyber civil defence is the responsibility of the entire community. Australia needs a national standing committee for cyber security emergency management and resilience that is an equal partnership between government, business, and academic specialists.

Adam Henry, Adjunct Lecturer, UNSW and Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Uncategorized

Dramatic advances in forensics expose the need for genetic data legislation


The issues surrounding the use of genetic data are complex.
image created by James Hereward and Caitlin Curtis


Caitlin Curtis, The University of Queensland; James Hereward, The University of Queensland; John Devereux, The University of Queensland; Karen Hussey, The University of Queensland, and Marie Mangelsdorf, The University of Queensland

Many people first became familiar with DNA testing through its use in the OJ Simpson murder trial in 1994. Now, 24 years later, there have been two dramatic advances in the capability of forensic genetics that mark the start of a new era.

The first is the amount of information we can predict about a person from DNA found at a crime scene, and the second is the way police can use open genealogy databases to identify people.

But we need to be careful how we use these new tools. If people lose trust in how DNA data is used and shared by police, it could have an adverse impact on other applications – such as medical care.

That’s why we’re calling for a Genetic Data Protection Act to ensure people have confidence in the way their DNA is accessed and used.

Read more:
DNA facial prediction could make protecting your privacy more difficult

We can learn a lot more from DNA now

Predicting traits from DNA, known as “DNA phenotyping”, is improving. Facial prediction, health traits, predisposition to disease, even personality traits and things about our mental health can be predicted from genetic data. Some researchers are even considering predicting propensity to drink or smoke.


We’re getting better at predicting physical traits, like faces, from DNA data.
Composite from PNAS


Law enforcement agencies around the world are using these traits to create predictive DNA “mugshots”, but in many countries there is no specific regulation on how and when they should be incorporated into policing.

And some types of predictions raise considerable ethical issues.

For example, should it be OK for law enforcement to predict the mental health or disease risk of a suspect? If so, should that information be used in a trial? If law enforcement predicts a high risk of a particular disease, should they be compelled to tell a suspect or their family?

Separation between databases is breaking down

You may be familiar with “CODIS” from CSI. This is the database that law enforcement has traditionally used to identify DNA collected at a crime scene. CODIS holds around 17.7 million DNA profiles. There are strict rules around who can be included in these databases, and the vast majority of profiles are from convicted offenders.

According to best estimates, the number of people who have taken genetic ancestry tests is slightly higher than this, and police have started using this data as well. The type of data in CODIS only allows close family matches, but the type of data in open ancestry databases allows much deeper relations to be found.

Even if you haven’t participated in genetic testing or made your genetic data public, you may have a relative who has. Currently, law enforcement is able to identify people based on matches as distant as third cousins.

On average, people have around 190 third cousins. One estimate indicates that over 90% of Americans of European descent already have a third cousin or closer relative in the open genealogy database GEDmatch. It may take as little as 2% of the population uploading their DNA data to a genealogy database for the entire population to be identifiable this way.
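A back-of-envelope calculation shows why such a small database can reach almost everyone. If each of your roughly 190 third cousins is, independently, in the database with probability p, the chance that at least one of them matches is 1 − (1 − p)^190. (This simple independence model is our illustration, not the method behind the cited estimate.)

```python
# Rough model: probability that at least one of ~190 third cousins
# appears in a database covering a fraction p of the population,
# assuming each relative is included independently.

def match_probability(p, relatives=190):
    """Chance of at least one relative being in the database."""
    return 1 - (1 - p) ** relatives

for p in (0.005, 0.01, 0.02):
    print(f"coverage {p:.1%}: match chance {match_probability(p):.1%}")
```

Even at 2% coverage the match probability under this model is well above 95%, which is why genealogy databases far smaller than CODIS can act as a population-wide identification tool.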


The 238 relatives in your generation that might be affected if you share your genetic data.
image designed by James Hereward and Caitlin Curtis


New statistical methods mean the separations between previously distinct genetic databases are disappearing. Traditional forensic markers can now be cross-referenced with ancestry data, even though they are completely different types of genetic data. This means close family members could be identified across different databases. These methods can also be used to re-identify subjects in medical genetics research projects.

There has been a lot of public support for the use of genetic genealogy to catch serial killers and rapists. In some cases, people are voluntarily uploading their data to help these efforts.

But where should we draw the line? Should genetic data only be used in serious crimes, or are we happy to have a comprehensive system of genetic surveillance that covers the entire population?

Private companies are aiding law enforcement

Both DNA phenotyping and forensic genealogy – which relies on amateur genealogists – are now being offered to law enforcement by private companies.

Parabon NanoLabs, a US-based DNA technology company, has partnered with armchair genealogist CeCe Moore. She started using genetic genealogy to find the parents of adoptees and children born through sperm donation, but now uses it to catch criminals.

Parabon also offers facial prediction services. While the science of facial prediction from DNA is getting better, it is still contentious, and several prominent scientists have cast doubt on whether Parabon can really do what it is promising.

Nevertheless, this move out of government labs and into private ones raises questions about oversight – and what exactly is happening to the data generated.

Genetic data is different from other kinds of data

Genetic data is highly unique and can be thought of as a personal 15-million-letter pin code. The code doesn’t just identify us; it also contains important information about our disease risk, personality traits, and even physical features like our face. That makes it very difficult to keep anonymous.


Genetic data is different from other kinds of data.
Edited from Shutterstock image


Unlike a credit card we can’t request a new genome if our data is compromised. And a stolen credit card won’t tell a perpetrator anything about the finances of our family members.

We understand what happens if we lose a credit card, but our understanding of genetic data is still developing. And we’re likely to see it put to unexpected uses in the future.

Read more:
It’s time to talk about who can access your digital genomic data

We need a ‘Genetic Data Protection Act’

Technological advances in genomics are outpacing public awareness, and existing legislation doesn’t fit genetic data well. Under current laws, the lab that produces the genetic data has ownership of the record. But if our genetic data represents a deep part of the essence of us, it shouldn’t be this easy for us to give up ownership of it.

We need new ways to protect genetic data to maintain trust in medical genomics. Sometimes people need their genome sequenced for medical purposes, but they might be reluctant to consent if trust has broken down around how genetic data could be used. That could result in poorer medical outcomes.

One solution to prevent this is a specific “Genetic Data Protection Act”, which would grant people ownership of their own data. However, it must be different from standard property rights: ownership should be immutable and nontransferable.

The issues around the use of our genetic data are complex, and individuals (and their descendants) must be protected. Under no circumstances should it be possible for an individual to unwittingly sign an agreement that results in a loss of control of their genetic data. Legislation is part of the solution, but education and new technological solutions will also be important.

The recent introduction of the digital My Health Record shows that Australians care about who is accessing their sensitive information. And people are already expressing unease about the confidentiality of their genetic data.

We must establish clear boundaries about how genetic data generated for medical purposes is used – whether by police or by any other interested parties. Giving genetic data the protection it needs, and making sure that medical genetic data doesn’t become a forensic resource, will be crucial to ensure public trust in medical genetics.

Caitlin Curtis, Research fellow, Centre for Policy Futures (Genomics), The University of Queensland; James Hereward, Research fellow, The University of Queensland; John Devereux, Professor of Law, The University of Queensland; Karen Hussey, Director, Centre for Policy Futures, The University of Queensland, and Marie Mangelsdorf, Research Fellow (Genomics), The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Posted in Uncategorized