Combining the facial recognition decisions of humans and computers can prevent costly mistakes


Students tested on their ability to tell whether two images were of the same person were wrong 30% of the time.


David White, UNSW

After a series of bank robberies in the US in 2014, police arrested Steve Talley. He was beaten during the arrest and held in maximum security detention for almost two months. His estranged ex-wife identified him as the robber in CCTV footage, and an FBI facial examiner later backed up her claims.

It turned out Talley was not the perpetrator. Unfortunately, his arrest left him with extensive injuries, and led to him losing his job and a period of homelessness. Talley has now become an example of what can go wrong with facial identification.

These critical decisions rest on the ability of humans and computers to decide whether two images are of the same person or different people. Talley’s case shows how errors can have profound consequences.

My research focuses on how to improve the accuracy of these decisions. This can make society safer by protecting against terrorism, organised crime and identity fraud. It can also make these decisions fairer by ensuring that errors do not lead to people being wrongly accused of crimes.

Read more:
DNA facial prediction could make protecting your privacy more difficult

Identifying unfamiliar faces

So just how accurate are humans and computers at identifying faces?

Most people are extremely good at recognising faces of people they know well. However, in all of the critical decisions outlined above, the task is not to identify a familiar face, but rather to verify the identity of an unfamiliar face.

To understand just how challenging this task can be, try it for yourself: are the images below of the same person or different people?


Same or different person? The correct answer is provided at the end of this article.



Humans versus machines

The above image pair is one of the test items my colleagues and I used to evaluate the accuracy of humans and computers in identifying faces, in a paper published last week in Proceedings of the National Academy of Sciences.

We recruited two groups of professional facial identification experts. One group comprised international experts who produce forensic analysis reports for court (Examiners). The other comprised face identification specialists who make quicker decisions, for example when reviewing the validity of visa applications or in forensic investigations (Reviewers). We also recruited a group of “super-recognisers”, who have a natural ability to identify faces, similar to groups that have been deployed as face identification specialists in the London Metropolitan Police.

Performance of these groups compared to undergraduate students and to the algorithms is shown in the graph below.


Accuracy of participant groups and face recognition algorithms in Phillips et al (2018).


Black dots on this graph show the accuracy of individual participants, and the red dots show the average performance of the group.

The first thing to notice is that there is a clear ordering of performance across the groups of humans. Students perform relatively poorly as a group – with over 30% errors on average – showing just how challenging the task is.

The professional groups fare far better on the task, making fewer than 10% errors on average, with nine out of 87 attaining the maximum possible score on the test.

Interestingly, the super-recognisers also performed extremely well, with three out of 12 attaining the maximum possible score. These people had no specialist training or experience in performing face identification decisions, suggesting that selecting people based on natural ability is also a promising solution.

Read more:
Class action against Facebook over facial recognition could pave the way for further lawsuits

Performance of the algorithms is shown by the red dots on the right of the graph. We tested three iterations of the same algorithm as it was improved over the past two years. There is a clear improvement with each iteration, demonstrating the major advances that deep convolutional neural network technology has made over the past few years.

The most recent version of the algorithm attained accuracy that was in the range of the very best humans.

The wisdom of crowds

We also observed large variability in all groups. No matter which group we look at, performance of individuals spans the entire measurement scale – from random guessing (50%) to perfect accuracy (100%).

This variation is problematic, because it is individuals that provide face identification evidence in court. If performance varies so wildly from one individual to the next, how can we know that their decisions are accurate?

Our study provides a solution to this problem. By averaging the responses of groups of humans, using what is known as a “wisdom of crowds” approach, we were able to attain near-perfect levels of accuracy. Group performance was also more predictable than individual accuracy.

Perhaps the most interesting finding was when we combined the decisions of humans and machines.

By combining the responses of just one examiner and the leading algorithm, we were able to attain perfect accuracy on this test – better than either a single examiner or the best algorithm working alone.
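The fusion approach described above can be sketched in a few lines. This is a minimal illustration, not the study’s actual procedure or data: each judge (human examiner or algorithm) rates a face pair on a scale where positive scores mean “same person” and negative scores mean “different people”, and the group decision is simply the sign of the average score.

```python
# A minimal sketch of "wisdom of crowds" fusion for face identification.
# The judges, scale, and scores below are invented for illustration.

def fuse_judgments(scores):
    """Average the judges' scores; the sign of the mean is the decision."""
    mean = sum(scores) / len(scores)
    return "same person" if mean > 0 else "different people"

# One examiner is unsure (slightly positive), but the algorithm is
# confident the pair shows different people; the fused decision follows
# the more confident judge.
examiner_score = 0.4
algorithm_score = -2.1

decision = fuse_judgments([examiner_score, algorithm_score])
print(decision)  # -> different people
```

Averaging confidence ratings, rather than taking a majority vote over binary answers, is what lets a highly confident judge outweigh an uncertain one.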

Face recognition in Australia

This is a timely result as Australia rolls out the National Face Identification scheme, which will enable police agencies to search large databases of images using face recognition software.

Read more:
Close up: the government’s facial recognition plan could reveal more than just your identity

Importantly, this application of face recognition technology is not automatic, unlike automated border control systems. Rather, the technology generates “candidate lists” like the one shown below. For these systems to be of any use, humans must review the candidate lists to decide whether the target identity is present.


A ‘candidate list’ returned by face recognition software performing a database search. Humans must adjudicate the output of these systems by deciding whether the person in the ‘probe’ image – the image at the top – is pictured in the array below, and if so to select the matching face. The correct answer is provided at the end of this article.



In a 2015 study my colleagues and I found that the average person makes errors on one in every two decisions when reviewing candidate lists, and chooses the wrong person 40% of the time!

False positives like these can waste precious police time, and have a potentially devastating effect on people’s lives.

The study we published this week suggests that protecting against these costly errors requires careful consideration of both human and machine components of face recognition systems.

Correct answers: The pair of images are different people. The matching image in the candidate list is top row, second from left.

David White, Scientia Fellow, UNSW

This article was originally published on The Conversation. Read the original article.


Looks aren’t so deceiving: AI could predict your next move from watching your eye gaze


It is possible to buy accurate and robust eye trackers for as little as A$125.


Eduardo Velloso, University of Melbourne and Tim Miller, University of Melbourne

Our eyes often betray our intentions. Think of poker players hiding their “tells” behind sunglasses or goalkeepers monitoring the gaze of the striker to predict where they’ll shoot.

In sports, board games, and card games, players can see each other, which creates an additional layer of social gameplay based on gaze, body language and other nonverbal signals.

Digital games completely lack these signals. Even when we play against others, there are few means of conveying implicit information without words.

Read more:
What eye tracking tells us about the way we watch films

However, the recent increase in the availability of commercial eye trackers may change this. Eye trackers use an infra-red camera and infra-red LEDs to estimate where the user is looking on the screen. Nowadays, it is possible to buy accurate and robust eye trackers for as little as A$125.

Eye tracking for gaming

Eye trackers are also sold built into laptops and VR headsets, opening up many opportunities for incorporating eye tracking into video games. In a recent review article, we offered a catalogue of the wide range of game mechanics made possible by eye tracking.

This paved the way for us to investigate how social signals emitted by our eyes can be incorporated into games against other players and artificial intelligence.

To explore this, we used the digital version of the board game Ticket to Ride. In the game, players must build tracks between specific cities on the board. However, because opponents might block your way, you must do your best to keep your intentions hidden.


Our studies using Ticket to Ride to explore the roles of social gaze in online gameplay.


In a tabletop setting, if you are not careful, your opponent might figure out your plan based on how you look at the board. For example, imagine that your goal is to build a route between Santa Fe and Seattle. The natural tendency is to look back and forth between those cities, considering alternative routes and the resources in your hand of cards.

Read more:
A sixth sense? How we can tell that eyes are watching us

In our recent paper, we found that when humans can see where their opponents are looking, they can infer some of their goals – but only if that opponent does not know that their eyes are being monitored. Otherwise, they start employing different strategies to try to deceive their opponent, including looking at a decoy route or looking all over the board.

Can AI use this information?

We wanted to see if a game AI could use this information to better predict the future moves of other players, building upon previous models of intention recognition in AI.

Most game AIs use the player’s actions to predict what they may do next. For example, in the figure below on the left, imagine a player is claiming routes to travel from Santa Fe to some unknown destination on the map. The AI’s task is to determine which city is the destination.

When at Santa Fe, all of the possible destinations are equally likely. After the player gets to Denver, it becomes less likely that they want to go to Oklahoma City, because they could have taken a much more direct route. If they then travel from Denver to Helena, Salt Lake City becomes much less likely, and Oklahoma City even less so.


Left: without gaze information, it is difficult to tell where your opponent is going next. Right: by determining that your opponent keeps looking at Helena and Seattle, the AI can make better predictions of the routes the opponent might take.



In our model, we augmented this basic model to also consider where this player is looking.

The idea is simple: if the player is looking at a certain route, the more likely the player will try to claim that route. As an example, consider the right side of the figure. After going to Denver, our eye-tracking system knows that the player has been looking at the route between Seattle and Helena, while ignoring other parts of the map. This tells us that it is more likely that they take this route and end up in Seattle.

Our AI increases the relative likelihood of this action, while decreasing others. As such, its prediction is that the next move will be to Helena, rather than to Salt Lake City. You can read more about the specifics in our paper.
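The intuition behind the model can be sketched as a simple Bayesian update. This is not the paper’s actual model: the cities, move likelihoods, gaze counts and weighting scheme below are all invented for illustration. The idea is just that the posterior over destinations combines a prior, the likelihood of the observed moves, and a bonus for routes the player has been looking at.

```python
# Illustrative sketch of gaze-augmented intention recognition.
# All numbers and the gaze-bonus formula are made up for this example.

def predict_destination(prior, move_likelihood, gaze_counts, gaze_weight=0.5):
    """Posterior over destination cities from moves plus gaze evidence."""
    posterior = {}
    for city in prior:
        # Each look at a route toward this city boosts its likelihood.
        gaze_bonus = (1 + gaze_counts.get(city, 0)) ** gaze_weight
        posterior[city] = prior[city] * move_likelihood[city] * gaze_bonus
    total = sum(posterior.values())
    return {city: p / total for city, p in posterior.items()}

prior = {"Seattle": 1/3, "Salt Lake City": 1/3, "Oklahoma City": 1/3}
# After the move to Denver, Oklahoma City is less consistent with the path.
move_likelihood = {"Seattle": 0.5, "Salt Lake City": 0.4, "Oklahoma City": 0.1}
# The player has repeatedly looked at the route through Helena to Seattle.
gaze_counts = {"Seattle": 4}

posterior = predict_destination(prior, move_likelihood, gaze_counts)
print(max(posterior, key=posterior.get))  # -> Seattle
```

Without the gaze bonus, Seattle and Salt Lake City would remain close competitors; the gaze evidence is what separates them early.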


We evaluated how well our AI could predict the next move in 20 Ticket To Ride two-player games. We measured the accuracy of our predictions and how early in the game they could be made.

Read more:
Eye tracking is the next frontier of human-computer interaction

The results show that the basic model of intention recognition correctly predicted the next move 23% of the time. However, when we added gaze to the mix, the accuracy more than doubled, increasing to 55%.

Further, the gaze model was able to predict the correct destination city earlier than the basic model, with the AI that used gaze recognising intentions a minute and a half earlier than the one without. These results demonstrate that gaze can be used to predict actions much better and faster than past actions alone.

Recent unpublished results show that the gaze model also works if the person being observed knows that they are being observed. We have found that the deception strategies that players employ to make it more difficult for other players to determine their intentions do not fool AIs as well as they fool humans.

Where to next?

This idea can be applied in contexts other than games – for example, collaborative assembly between robots and humans in a factory.

In these scenarios, a person’s gaze will naturally lead to earlier and more accurate prediction by the robot, potentially increasing safety and leading to better coordination.

Eduardo Velloso, Lecturer in Human-Computer Interaction, ARC DECRA Fellow, University of Melbourne and Tim Miller, Associate Professor of Computer Science, University of Melbourne

This article was originally published on The Conversation. Read the original article.


Why electronic surveillance monitoring may not reduce youth crime


Victoria is introducing legislation to require young criminal offenders to wear electronic tracking devices.


Darren Palmer, Deakin University

Last week, the Victorian government announced a new surveillance monitoring scheme directed at young criminal offenders aged 16 and older.

Under the legislation to be introduced later this year, the Youth Parole Board will be given the power to decide if offenders should be required to wear an electronic monitoring device and undergo regular drug and alcohol testing after serving their sentences.

While elements of this proposal would be new for Australia, various jurisdictions have used electronic monitoring over the years. Matt Black and Russell Smith reviewed the use of electronic monitoring schemes across the country in 2003. Western Australia introduced tracking devices for young people in 2004.

Read more:
GPS monitoring may intrude on prisoners’ privacy

Across Australia, intensive surveillance systems are increasingly being seen as a way to manage risk. New South Wales is currently testing in-vehicle telematics surveillance apps for all young drivers (18- to 25-year-olds), who are deemed at higher risk of accidents or committing driving offences.

Families and Children Minister Jenny Mikakos claims the Victorian monitoring measure is needed to ensure high-risk young offenders comply with their parole conditions. The scheme could be expanded if it proves successful.

Lack of evidence and exorbitant costs

While there is little Australian research into the efficacy of electronic monitoring of young people (or post-release offenders generally), the Jill Dando Institute in the UK recently conducted a systematic review of research from countries around the world.

Some aspects of the Victorian proposal align with the international evidence on likelihood of success. The review shows electronic monitoring can increase the likelihood of repeat offenders being caught, serve as a constant reminder to offenders of their parole status and conditions, and reduce peer pressure by limiting access to the people and places that might contribute to repeat offending.

In addition, the review found that several behavioural changes brought about by electronic monitoring might contribute to a reduction in crime. These include offenders being able to remain at home with family support (rather than being incarcerated), participate in treatment programs, abstain from drug and alcohol use, and even secure a job and a regular source of income.

Read more:
Why police in schools won’t reduce youth crime in Victoria

However, the review found that electronic monitoring works best with just one category of offenders: sex offenders. When extended to broader “high-risk” offenders of all ages, there was no significant positive effect compared to non-monitoring.

The review also highlights the crucial importance of getting the implementation right. At this stage, little is known about how the Victorian electronic monitoring proposal would be implemented.

The right technology is vital. So, too, is the need to ensure strong data management and integration, which is problematic in Victoria. There also needs to be strong communication between a number of relevant agencies (an issue in Victoria), and detailed planning and program administration protocols prior to implementation (unknown at this stage).

The final issue is financial. The Victorian government has indicated an investment of A$2.1 million for an estimated 20 to 30 people in the initial trial phase of the program. This means at least $70,000 per person at the outset.
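The per-person figure follows directly from the announced budget. A quick check, using the upper end of the stated 20-to-30-person cohort (the conservative case; the smaller cohort would cost more per person):

```python
# Back-of-envelope check of the per-person trial cost stated above.
budget = 2_100_000          # A$2.1 million announced for the trial
people_high_estimate = 30   # upper end of the 20-30 person cohort

cost_per_person = budget / people_high_estimate
print(int(cost_per_person))  # -> 70000
```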

No doubt some of this money will be allocated to set-up costs for the monitoring system and wouldn’t need to be spent again in the future. But it’s still a considerable expense, and raises questions about whether the money could be better spent on other youth offender initiatives, such as drug/alcohol treatment, training and employment programs.

Election-year politics

This “get tough” approach to young criminal offenders comes during an election year in Victoria, when “law and order” issues tend to dominate debate. But evidence-based research of what does and does not work is being pushed aside in this case. So, too, are the negative effects that can arise from these policies.

Rather than focus on which party is toughest on crime, a more progressive approach on “law and order” issues is needed. A permanent mechanism for reviewing criminal justice policies and procedures is one idea. I’d suggest an independent Criminal Justice Commission that evaluates policy initiatives in the run-up to each election and conducts five-year reviews of criminal justice policies.

Read more:
Tough on crime: Victoria is not learning lessons from abroad

Too bureaucratic? Too academic? Australia already has a similar system for evaluating economic policy (the pre-election budget analysis) and five-year defence strategic reviews.

To its credit, Victoria Police tried to implement something like this with its “Blue Paper”, but that was quietly shelved. In any case, we need a systematic review of the criminal justice system rather than an agency review.

We can only hope that between now and the November state election there will be some effort to develop progressive criminal justice policies directed at holistic crime prevention rather than a focus on more intensive surveillance.

The idea that more surveillance can solve recidivism is misguided. It might be better at catching breaches of parole, but for what purpose? Certainly not for helping a young offender understand the effects of their behaviours, the harms they have caused, and the need to find assistance and a path to a different future.

Darren Palmer, Associate professor, Deakin University

This article was originally published on The Conversation. Read the original article.


Criminals can’t easily edit their DNA out of forensic databases


You’re nicked – and so is your DNA.


Caitlin Curtis, The University of Queensland and James Hereward, The University of Queensland

Over the last week or so, a number of news articles have reported that criminals could edit their genomes using cheap online kits to avoid being matched in criminal forensic databases.

What seems to be at the centre of these articles, and giving them a sense of credibility, are some quotes from George Church – a highly respected geneticist from Harvard.

Asked if CRISPR could alter DNA to the extent it would make forensic evidence unusable, Church reportedly told The Telegraph:

We could do that today, easily. A lot of it is done by blood and even if you just get a stem cell transplant you have a new identity.

I could imagine there being an industry.

But is it really so easy? From our perspective there may be some confusion around what is feasible, and what is actually happening now. Let’s unpack some of the issues and think about what would be required to pull off such a feat.

Read more:
From the crime scene to the courtroom: the journey of a DNA sample

Evading forensic databases

The mainstay of modern DNA identification is short tandem repeat (STR) markers, which are small sections of DNA that vary by length (the number of repeats). Multiple STR markers are used to create a DNA profile.

Most systems now use a panel of 24 DNA markers, but some will allow partial matches of as few as eight or nine markers. It might be possible, in theory, to cheat the system by changing only one of these markers, but in practice a hypothetical DNA-edited criminal would probably want to change several of them.
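A toy model makes the point concrete: editing a single marker barely dents a 24-locus profile. This sketch is illustrative only – real forensic panels use standardised loci (such as the CODIS core set), and the locus names, repeat lengths and matching rules below are invented.

```python
# Toy illustration of STR-profile matching with partial matches.
# Each locus maps to a pair of repeat lengths (one per chromosome copy).

def matching_loci(profile_a, profile_b):
    """Count loci where both profiles record the same repeat lengths."""
    shared = set(profile_a) & set(profile_b)
    return sum(1 for locus in shared if profile_a[locus] == profile_b[locus])

def is_match(profile_a, profile_b, threshold=8):
    """Declare a (partial) match if at least `threshold` loci agree."""
    return matching_loci(profile_a, profile_b) >= threshold

# A made-up 24-locus crime-scene profile.
crime_scene = {f"locus{i}": (10 + i, 12 + i) for i in range(24)}

# A hypothetical edited suspect: one marker shortened, 23 left unchanged.
suspect = dict(crime_scene)
suspect["locus0"] = (7, 12)

print(matching_loci(crime_scene, suspect))  # -> 23
print(is_match(crime_scene, suspect))       # -> True
```

With 23 of 24 loci still agreeing, the edited profile sails past a partial-match threshold of eight or nine loci, which is why a hypothetical DNA-edited criminal would need to change several markers, not one.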

STR markers are located in the more variable parts of our genome and this may make them more difficult to accurately target with gene editing tools. The easiest way to change your STR profile would probably be to delete some DNA and make the length of that marker shorter.

Technology for reading DNA is getting better, and DNA forensics is currently moving from STR markers to systems that look at more of our DNA and can tell us much more about someone.

Read more:
DNA facial prediction could make protecting your privacy more difficult

In the recent Golden State killer case, so-called “SNP chips” – that measure around 600,000 sites in our genome – were used to make matches to genealogy databases. DNA forensics is a moving field and a future criminal may have to edit much more of their DNA to evade this sort of matching.

But how much of your body would you need to change to avoid detection? Is it just the cells that are used for sampling – for example your cheek cells, your blood cells – or every cell in your body?

As George Church seems to point out, in theory a genetic manipulation to your blood (or another targeted area) could allow a criminal to be excluded as a suspect. In the Golden State killer case, police used “discarded” DNA from the suspect’s trash. To fully evade DNA forensics you would therefore likely have to make much more extensive changes (i.e. skin, semen, hair, blood, cheek cells).

Let’s look at the techniques that might be used by someone wanting to alter their DNA.

Editing genes with CRISPR

CRISPR, or CRISPR/Cas9, is a method for making precise edits to a genome.

For CRISPR to work it has to be delivered into cells. There are a number of ways to do this, but no one has published an effective way to change all of the cells in your body. Doing so is currently a formidable challenge.

It’s difficult to know exactly where we are with CRISPR in humans. There have been reports that the human immune system may attack the Cas9 enzyme required for CRISPR to work.

Human trials involving CRISPR are only just starting in Western countries. China has conducted tests, but most of these involve removing immune cells, editing them and putting them back.

Read more:
What is CRISPR gene editing, and how does it work?

There are reports that CRISPR doesn’t always modify all cells, and if criminals actually start using these kinds of techniques then law enforcement is going to be more alert to “mixed signal” samples.

Dangers of biohacking

The CRISPR Kit linked to in a Daily Mail article is from ODIN, a company that is part of the “DIY Bio” movement. The specific kit mentioned is designed to let someone edit bacteria.

The CEO of ODIN – Josiah Zayner – has, however, previously injected himself with CRISPR DNA that would enhance his muscles. At best this stunt is unlikely to work, and at worst could be quite damaging.

Community involvement in biology isn’t a bad thing, but modifying your own genome using CRISPR really isn’t something you should be doing at home on yourself. We don’t yet fully understand how this gene editing technology might affect other parts of our genome.

The US Food and Drug Administration has highlighted that DIY gene therapy is illegal and risky. The legality may not concern a criminal, but the potential for off-target effects should.

Stem cell replacement

Another way to change your genetic code is stem cell replacement. There is a precedent for this in people who have had stem cell or bone marrow transplants.

Studies have looked at the DNA in cells of people who have received donor stem cells.

They report finding both donor and recipient DNA – a state known as “chimerism”, the presence of two different genomes in one body – in many types of tissue and fluid, including mouthwash, oral swabs and fingernails, sometimes years after the transplant took place.

Hair follicles were thought to be unaffected by chimerism, but the genetic material from the Y chromosome of male donors has been detected in hair follicles of female recipients in at least two studies, suggesting that bone marrow replacement could affect much more of your body than originally thought.

So varying your DNA through stem cells is feasible, but as noted by Church:

CRISPR actually would be easier than a stem cell transplant because (a transplant) would have to be done sterilely and you would need to irradiate yourself to get rid of the old ones.

Changing out all of your bone marrow would be an extreme medical procedure.

What can we learn from this?

Changing your DNA profile to evade criminal databases is technically possible but it seems highly unlikely that criminals are actually doing this now. It probably wouldn’t even be effective with a DIY-bio kit. If any criminals are inspired to try and CRISPR themselves we would strongly recommend that they don’t.

George Church may have been speaking about what is possible in a somewhat hypothetical sense, and his quotes may have been taken out of context in some media coverage.

Sensationalised or not, this story is a useful thought exercise that reminds us how the world as we know it could change as the code of life starts to become re-writable.

Caitlin Curtis, Research fellow, Centre for Policy Futures (Genomics), The University of Queensland and James Hereward, Research fellow, The University of Queensland

This article was originally published on The Conversation. Read the original article.


94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


It would take the average person 244 hours per year (6 working weeks) to read all privacy policies that apply to them.


Katharine Kemp, UNSW

Australians are agreeing to privacy policies they are not comfortable with, and would like companies to collect only the data that is essential for the delivery of their service. That’s according to new nationwide research on consumer attitudes to privacy policies released today by the Consumer Policy Research Centre (CPRC).

These findings are particularly important since the government’s announcement last week that it plans to implement “open banking” (which gives consumers better access to and control over their banking data) as the first stage of the proposed “consumer data right” from July 2019.

Read more:
How not to agree to clean public toilets when you accept any online terms and conditions

Consumer advocates argue that existing privacy regulation in Australia needs to be strengthened before this new regime is implemented. In many cases, they say, consumers are not truly providing their “informed consent” to current uses of their personal information.

While some blame consumers for failing to read privacy policies, I argue that not reading is often rational behaviour under the current consent model. We need improved standards for consent under our Privacy Act as a first step in improving data protection.

Australians are not reading privacy policies

Under the Privacy Act, in many cases, the collection, use or disclosure of personal information is justified by the individual’s consent. This is consistent with the “notice and choice” model for privacy regulation: we receive notice of the proposed treatment of our information and we have a choice about whether to accept.

But according to the CPRC Report, most Australians (94%) do not read all privacy policies that apply to them. While some suggest this is because we don’t care about our privacy, there are four good reasons why people who do care about their privacy don’t read all privacy policies.

We don’t have enough time

There are many privacy policies that apply to each of us and most are lengthy. But could we read them all if we cared enough?

According to international research, it would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them, not including the time it would take to check websites for changes to these policies. This would be an impossible task for most working adults.
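The scale of that figure is easy to verify with rough assumptions. The per-policy numbers below are illustrative, chosen to be broadly consistent with the international estimate cited above (on the order of 1,400-plus policies encountered per year at roughly ten minutes each), and the working week is assumed to be 40 hours.

```python
# Back-of-envelope check of the privacy-policy reading-time estimate.
# All inputs are illustrative assumptions, not the study's exact figures.

policies_per_year = 1460    # assumed policies encountered per year
minutes_per_policy = 10     # assumed average reading time
hours_per_work_week = 40    # assumed full-time working week

hours_per_year = policies_per_year * minutes_per_policy / 60
work_weeks = hours_per_year / hours_per_work_week

print(round(hours_per_year))  # -> 243
print(round(work_weeks, 1))   # -> 6.1
```

Even with generous rounding, the result lands at roughly 244 hours, or about six working weeks, which is why not reading is rational behaviour.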

Under our current law, if you don’t have time to read the thousands of words in the policy, your consent can be implied by your continued use of the website which provides a link to that policy.

We can’t understand them

According to the CPRC, one of the reasons users typically do not read policies is that they are difficult to comprehend.

Very often these policies lead with feel-good assurances such as “We care about your privacy”, and leave more concerning matters to be discovered later in vague, open-ended terms, such as:

…we may collect your personal information for research, marketing, for efficiency purposes…

In fact, the CPRC Report states around one in five Australians:

…wrongly believed that if a company had a Privacy Policy, it meant they would not share information with other websites or companies.

Read more:
Consent and ethics in Facebook’s emotional manipulation study

We can’t negotiate for better terms

We generally have no ability to negotiate about how much of our data the company will collect, and how it will use and disclose it.

According to the CPRC Report, most Australians want companies only to collect data that is essential for the delivery of their service (91%) and want options to opt out of data collection (95%).

However, our law allows companies to group into one consent various types and uses of our data. Some are essential to providing the service, such as your name and address for delivery, and some are not, such as disclosing your details to “business partners” for marketing research.

These terms are often presented in standard form, on a take-it-or-leave-it basis. You either consent to everything or refrain from using the service.

We can’t avoid the service altogether

According to the CPRC, over two thirds of Australians say they have agreed to privacy terms with which they are not comfortable, most often because it is the only way to access the product or service in question.

In a 2017 report, the Productivity Commission expressed the view that:

… even in sectors where there are dominant firms, such as social media, consumers can choose whether or not to use the class of product or service at all, without adversely affecting their quality of life.

However, in many cases, we cannot simply walk away if we don’t like the privacy terms.

Schools, for example, may decide what apps parents must use to communicate about their children. Many jobs require people to have Facebook or other social media accounts. Lack of transparency and competition in privacy terms also means there is often little to choose between rival providers.

We need higher standards for consent

There is frequently no real notice and no real choice in how our personal data is used by companies.

The EU General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018, provides one model for improved consent. Under the GDPR, consent:

… should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement.

Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem

The Privacy Act should be amended along these lines to set higher standards for consent, including that consent should be:

  • explicit and require action on the part of the customer – consent should not be implied by the mere use of a website or service and there should be no pre-ticked boxes. Privacy should be the default;
  • unbundled – individuals should be able to choose to consent only to the collection and use of data essential to the delivery of the service, with separate choices of whether to consent to additional collections and uses;
  • revocable – the individual should have the option to withdraw their consent in respect of future uses of their personal data at any time.

While further improvements are needed, upgrading our standards for consent would be an important first step.

Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article was originally published on The Conversation. Read the original article.
