How to spot dodgy CVs

NINA HENDY / Thursday, May 14, 2015

It can cost a business tens of thousands to hire a new employee. But what if their CV is full of lies? 

A new recruit is a major expense for your business, which is why it’s paramount to run all the checks you can to make sure the CVs sitting on your desk aren’t full of lies.

A survey of more than 23,000 businesses by career matching site OneShift last year found that more than 56% had experienced staff lying on their CV. The seven most common things people lie about are dates of employment, job titles, skills and accomplishments, salary, education, responsibilities, and using family and friends for references.

Another recent survey, by Talent2, found that more than two-thirds of employers (67.2%) have come across job candidates who lied on their CV, suggesting the practice may be even more widespread.

And while lying on your CV is not in itself a criminal offence, government employers are definitely cracking down on the practice. In one UK case, an employee received a six-month suspended prison sentence and was ordered to carry out 150 hours of community work for lying on her CV.

Australia’s public service is following suit, with a public servant recently convicted of fraud for lying on their CV.

According to newspaper reports, the Australian Tax Office told its employees last month that one of their co-workers had lost their job and been found guilty of fraud after lying to secure a job.

When confronted, the employee resigned and the matter was referred to the Commonwealth Director of Public Prosecutions. The employee later pleaded guilty to charges of knowingly using a false document and dishonestly deceiving a public official, and was convicted and fined $1000 on each count.

It comes after a clause was added to the Public Service Code of Conduct in 2013 to increase bosses’ power to take action against public servants who had been proven to have lied in their job applications.

Commissioner of Taxation Chris Jordan wrote in a staff bulletin about the issue that: “Falsifying your work qualifications or work history is deceitful, and chances are you will be caught out eventually, especially if you’re hired for skills and attributes that you simply don’t have. Don’t let ambition blind you to the requirement that you act with honesty and integrity at all times.”

But the problem isn’t confined to the public sector.

Tudor Marsden-Huggins, managing director of recruitment marketing specialists Employment Office, says it’s a common problem, especially for employers who lack either the time or the know-how to run a rigorous screening process.

“We’re regularly seeing a steady rise in the number of employers requesting rigorous pre-employment screening, vetting and background checks. Employers are definitely becoming more cautious about whom they hire, and with approximately a quarter of candidates putting false or misleading information in their applications, they have good reason to be wary.”

It can cost a company anything from several thousand to tens of thousands of dollars to hire a new employee. For high-level positions, this can swell to over $50,000. An initial investment in pre-employment checks can save employers money twice over: on the initial hire, and on recruiting a replacement if the person’s deceit is uncovered.

“What candidates have to realise is that sooner or later, false information provided at the application stage will be tested against your practical skills and knowledge. If an employer discovers you have lied to get the job, even months or years into the relationship, your employment can be terminated immediately,” Marsden-Huggins says.

Candidates are at greater risk than ever of being caught out in the online age, when key contacts are so easily accessible, he says.

Leanne Hagerty is the head of people management with management consultancy Be Business. She has seen several cases where a candidate was about to be offered the job but was caught in a lie at the last minute.

The problem is that if someone lies on their CV or during an interview to cover up a past failure or under-performance, it’s not unreasonable for the potential employer to suspect they may do the same thing once employed, she says.

“Usually one of their references let slip a piece of information that’s contrary to the story we’ve been told, or it comes out in the background, security or police checks we routinely conduct. At the senior level, it’s unusual for it to come out during the interview itself unless the candidate is particularly foolish.”

Sell yourself on your future focus and potential, she says.

“Don’t assume you won’t be found out if you lie. Maybe you’ll get lucky but today that’s pretty unlikely, and if it’s a job you really want, then it’s definitely not worth risking your chances with a lie that will probably be found out anyway.”

How to protect your business from false job applications

  • Always verbally check more than one reference
  • Don’t take a CV at face value – talk through the candidate’s work history and clarify dates and job titles during the interview process
  • Implement police checks and other security checks for each new employee
  • Call previous employers and clarify all details of the candidate’s role, including salary bracket
  • Consider hiring an HR specialist during key recruitment periods

Original article found HERE.

Posted in Uncategorized | Leave a comment

Big steal: Fraud, mistakes, shoplifting and employee theft cost Australian retailers $2.7 billion a year

RENEE THOMPSON / Thursday, November 5, 2015

Employee theft is a major contributor to Australian retailers losing an estimated $2.7 billion a year through criminal activities and negligence.

Released today, the Global Retail Theft Barometer for 2014-15 looks at the causes of “retail shrink” – the losses incurred by retailers because of internal theft, shoplifting, administration errors or supplier fraud – in 24 countries, based on in-depth phone and written survey interviews conducted with more than 200 retailers.

The report found shoplifting by customers is the biggest cause of retail shrink in 18 of the 24 countries surveyed.

In Australia, shoplifting was the biggest cause of retail shrink for the 2014-15 financial year, at 39%, with employee theft second at 25%.

Checkpoint Systems Asia Pacific (Australia/NZ) part-commissioned the study, and its sales vice president Mark Gentle said in a statement the report found the primary drivers of employee theft in Australia are weak pre-employment screening procedures, reduced supervision of staff, a growing part-time workforce and the ease of selling stolen merchandise.

But Gentle said Australian retailers are working to combat the problem.

“To combat shrink, retailers are adopting strategies to approach losses from a wider perspective from all levels within the organisation and work with their supplier and solutions partners,” he says.

“With the right technologies, people and processes, they can achieve improved merchandise availability, which directly impacts shoppers’ satisfaction and retailers’ profitability.”

Brett Warfield, chief executive of Warfield and Associates, told SmartCompany this morning that, in terms of overall retail shrinkage, employee theft and shoplifting continue to represent the biggest concerns for employers.

Warfield says the report’s findings regarding employee theft are in line with the types of fraud he often encounters, with workers in a position to exploit known weaknesses.

“I think it’s pretty consistent with what we’ve seen as well, it’s employees on the ground who are the ones with access to systems and employees that know weaknesses in systems,” he says.

Warfield says that to combat shoplifting, retailers need to deploy a range of deterrents, including electronic tagging and CCTV, to offset a decline in staff numbers.

“There’s a trend that staff numbers have diminished in retail, there are less staff looking after customers which increases the risk of shoplifting,” he says.

“If you don’t have staff to serve and monitor at the same time it increases the chances of shoplifting occurring.”

One of the big things retailers can do is conduct regular stocktakes, Warfield says, while also putting in place systems that alert owners to things such as refund and void fraud.

“If you’re not looking at refunds and voids on daily basis, you’re leaving yourselves exposed to employee theft,” he says.
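
The daily refund/void check Warfield describes can be sketched as a simple anomaly test. This is a hedged illustration: the register names, counts and the two-standard-deviation threshold below are all hypothetical, not drawn from any real system.

```python
from statistics import mean, stdev

# Hypothetical refund counts per register over the past week.
history = {
    "register_1": [3, 2, 4, 3, 2, 3, 4],
    "register_2": [2, 3, 2, 2, 14, 3, 2],  # one suspicious spike
}

def flag_anomalies(history, z_threshold=2.0):
    """Return registers whose worst day exceeds mean + z_threshold * stdev."""
    flagged = []
    for register, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma and max(counts) > mu + z_threshold * sigma:
            flagged.append(register)
    return flagged

print(flag_anomalies(history))  # → ['register_2']
```

In practice the same check would run against point-of-sale logs each day, with the threshold tuned to the store’s normal refund volume.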

In terms of monitoring stock, it is also about educating and training staff to know what to look for, Warfield says.

“If the majority of people understand what to do and the warning signs of potential employee theft they are more likely to whistle blow,” he says.

Warfield says the biggest issue for small businesses is bringing on people who “the owner is totally relying on the ethics of”.

“Employment screening needs to be effective. If the employee is rogue, it can damage the business significantly,” he says.

“It’s even more important for small business to do background checks on people and monitor them in first few months.”

Sylvain Mansotte, chief executive of Fraudsec, told SmartCompany this morning a useful tool for understanding fraud among workers is to consider the Fraud Triangle Framework, which was created by a criminologist in the 1950s.

It considers a person’s opportunities, rationale and motivation as three key aspects when identifying fraud.

“Retailers can control opportunity and put systems in place to prevent people from stealing,” he says.

“They can also identify red flags with staff who potentially could go over the edge.”

Training of staff and access to safe ways of blowing the whistle is another key concern for Mansotte.

“You have to train them and give them means to report,” he says.

“It’s hard to dob in your mate and it’s best to use an anonymous reporting channel – a mechanism where people can report the theft anonymously.”

Original article found HERE.

How silent signals from your phone could be recording and tracking you


Advertisers may track a customer’s shopping preferences within a shopping centre by using ultrasonic beacons emitted from their mobile phones.
Mai Lam/The Conversation NY-BD-CC, CC BY-SA


Richard Matthews, University of Adelaide

My lounge room is bugged. My phone is broadcasting an ultrasonic signal to my Blu-ray player via an acoustic side channel beyond human hearing.

The channel networks the two devices, similar to how a dial-up connection used to get our computers online before the days of the NBN. The same technology is behind Google’s Nearby API through their Eddystone protocol, and is the basis of products sold by the startup Lisnr. It’s also the reason more and more apps are requesting access permissions to your microphone.

Read more:
Can sound be used as a weapon? 4 questions answered

Aside from networking, companies use ultrasonic signals (or beacons) to gather information about users. That could include monitoring television viewing and web browsing habits, tracking users across multiple devices, or determining a shopper’s precise location within a store.

They use this information to send alerts that are relevant to your surroundings – such as a welcome message when you enter a museum or letting you know about a sale when you pass by a particular store.

But since this technology records sound – even if temporarily – it could constitute a breach of privacy. An analysis of various Australian regulations covering listening devices and surveillance reveals a legal grey area in relation to ultrasonic beacons.

How does ultrasonic data transfer work?

Google Nearby enables Android phone users who are in close proximity to each other to connect their devices and share data, such as documents or media. Google says:

To share and collaborate in apps, Nearby uses Bluetooth, Wi-Fi, and inaudible sound to detect devices around your device. (Some people can hear a short buzz.)

These inaudible sounds are ultrasonic beacons transmitting data that is then picked up by your phone.

To demonstrate this technology, I recorded such a beacon being broadcast in my lounge room while watching Netflix. In the below image you can see the audio ends around the 15kHz mark with the ultrasonic beacon beginning at 20kHz, the point at which average human hearing ends.


Audio capture demonstrating the different frequencies over a 71 second period while watching Netflix. The ultrasonic beacon is apparent in the right hand side of the waterfall diagram.



Since these ultrasonic sounds are the only relevant part of the data signal, the lower-frequency audible signals (such as speech) that are also captured must be removed. This is done using a high-pass filter, which keeps the high frequencies in the data and attenuates the lower ones.

This means, in theory, that while the device could be recording sound, it isn’t keeping the parts of the recording that might include conversation.

Different filters process signals in different ways. While filters constructed from basic electrical components do not require any storage of the signal, digital software filters require the signal to be stored temporarily.
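
To make the digital case concrete, here is a minimal sketch of a software high-pass filter applied to a synthetic capture rather than a real phone recording. The sample rate, tone frequencies and 18 kHz cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000                # sample rate in Hz (illustrative)
t = np.arange(fs) / fs     # one second of samples
# Simulated capture: an audible speech-band tone (1 kHz) plus an
# ultrasonic beacon carrier (20 kHz).
capture = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 20_000 * t)

# 4th-order Butterworth high-pass with an 18 kHz cutoff: everything
# below the cutoff, including any conversation, is strongly attenuated.
b, a = butter(4, 18_000, btype="highpass", fs=fs)
filtered = filtfilt(b, a, capture)

def band_energy(x, lo, hi):
    """Energy of x between lo and hi Hz, measured via the FFT."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(np.sum(spectrum[(freqs >= lo) & (freqs <= hi)] ** 2))

speech = band_energy(filtered, 500, 2_000)      # audible band, almost gone
beacon = band_energy(filtered, 19_000, 21_000)  # ultrasonic band, retained
print(f"speech-band energy: {speech:.3g}, beacon-band energy: {beacon:.3g}")
```

The point relevant to the legal question is visible in the code itself: `capture`, the full audio spectrum including the speech-band component, must exist in memory before `filtfilt` runs. That is exactly the temporary storage a digital filter requires.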

Is this kind of recording legal?

In South Australia, where I am based, a listening device is precisely defined as:

a device capable of being used to listen to or record a private conversation or words spoken to or by any person in private conversation (…) but does not include a device being used to assist a person with impaired hearing to hear sounds ordinarily audible to the human ear.

There is no exemption provided for recording sounds and then removing the audible portion.

It is generally unlawful “to overhear, record, monitor or listen to a private conversation” unless you have the express permission of all parties involved. Since audio is being recorded using a standard microphone in the course of an ultrasonic data transfer, the full audio spectrum – including any conversation occurring – is being sampled at the same time.

Read more:
Your mobile phone can give away your location, even if you tell it not to

The type of filter used is therefore critical. If a digital filter is being used to extract the ultrasonic data, the temporary storage of the full audio spectrum could be considered a recording. And that requires consent.

Google gives users the chance to opt out the first time notifications are made using the Nearby service. However, this could only be construed as consent from the phone’s owner, not from all parties to a conversation being recorded in private. Also, by the time the notification happens, the recording has already occurred.


Google’s FAQ explaining the opt-out process for the Nearby API.



What about location tracking?

Advertisers can use ultrasonic signals that speak to your mobile phone to establish where you are within a store. They can also correlate this data with other advertising metadata easily obtained from cookies to track your broader movements.

This further complicates matters regarding their legality.

In South Australia, a tracking device is explicitly defined as:

a device capable of being used to determine the geographical location of a person, vehicle or thing and any associated equipment.

Since it is generally illegal to track someone without their consent – implied or otherwise – if an advertiser is using an app combined with an ultrasonic beacon to track you and you are unaware that they are doing so, they could be breaking the law.

Google says the Nearby protocol is battery-intensive due to the use of Bluetooth and wifi. As such “the user must provide consent for Nearby to utilise the required device resources”. It says nothing about the legality of needing permission to record sound or track users.

On its online support page, Google does warn that the Nearby service is a one-way communication channel, with your phone never communicating directly to a Nearby service.

But since users are required to opt-out of the service, it’s hard to argue that they have given informed consent.


Google explains that Nearby devices do not connect directly in the way Lisnr technology does; however, nothing is specified about what happens to data sent from your phone to Google or other third-party servers.



What can I do to protect my privacy?

Users need to be aware of the potential to be tracked from ultrasonic beacons such as Google’s Nearby service.

Since this is a built-in feature of Google’s Pixel phone and other Android phones, users need to be properly informed about the Nearby service and the dangers of revealing data about themselves. Merely blocking app permissions that request use of your phone’s microphone will not be enough.

Read more:
7 in 10 smartphone apps share your data with third-party services

One research group has released a patch that proposes to modify the permission request on phones, requiring apps to state individually when they want access to your microphone to track inaudible signals. This doesn’t solve the built-in problem of Google’s API, though.

Google and other mobile phone companies should do more to ensure they are adequately gaining informed consent from users to ensure they do not fall foul of the law.

Thanks to reader feedback we’ve updated this article at the author’s request to remove references to Apple’s iBeacon, which does not use an acoustic side channel for data transfer.

Richard Matthews, PhD Candidate, University of Adelaide

This article was originally published on The Conversation. Read the original article.

DNA facial prediction could make protecting your privacy more difficult


The science of DNA facial reconstruction is advancing rapidly.
Composite from Parabon and PNAS


Caitlin Curtis, The University of Queensland and James Hereward, The University of Queensland

Technologies for amplifying, sequencing and matching DNA have created new opportunities in genomic science. In this series When DNA Talks we look at the ethical and social implications.

Everywhere we go we leave behind bits of DNA.

We can already use this DNA to predict some traits, such as eye, skin and hair colour. Soon it may be possible to accurately reconstruct your whole face from these traces.

This is the world of “DNA phenotyping” – reconstructing physical features from genetic data. Research studies and companies like 23andMe sometimes share genetic data that has been “anonymised” by removing names. But can we ensure its privacy if we can predict the face of its owner?

Here’s where the science is now, and where it could go in the future.

Read more:
Is your genome really your own? The public and forensic value of DNA

Predicting hair, eye and skin colour

DNA phenotyping has been an active area of research by academics for several years now. Forensic biology researchers Manfred Kayser and Susan Walsh, among others, have pioneered several DNA phenotyping methods for forensics.

In 2010, they developed the IrisPlex system, which uses six DNA markers to determine whether someone has blue or brown eyes. In 2012, additional markers were included to predict hair colour. Last year the group added skin colour. These tests have been made available via a website, and anyone with access to their genetic data can try them out.
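
As a hedged illustration of how a small-marker predictive test of this kind can work in principle, the sketch below scores genotypes with a logistic model. The marker names echo published IrisPlex SNPs, but the genotypes, weights and intercept are invented for illustration and do not reproduce the real IrisPlex model.

```python
import math

# Genotypes coded as minor-allele counts (0, 1 or 2) per marker.
# Marker names follow IrisPlex; the values here are hypothetical.
genotypes = {"rs12913832": 2, "rs1800407": 0, "rs12896399": 1}

# Hypothetical weights: positive values push the prediction toward blue.
weights = {"rs12913832": 3.1, "rs1800407": -1.2, "rs12896399": 0.5}
intercept = -4.0

# Logistic model: a weighted sum of genotypes squashed to a probability.
score = intercept + sum(weights[m] * genotypes[m] for m in genotypes)
p_blue = 1 / (1 + math.exp(-score))
print(f"P(blue eyes) = {p_blue:.2f}")  # → P(blue eyes) = 0.94
```

The real system is a multinomial model trained on thousands of genotyped individuals, but the structural idea is the same: a weighted sum of marker genotypes passed through a logistic function.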

Trait predictions are being used to address a number of questions. Recently, for example, they were used to suggest that the “Cheddar Man” (the UK’s oldest complete human skeleton) may have had dark or dark to black skin and blue/green eyes. The predictive models are mostly built on modern European populations, so caution may be required when applying the tests to other (especially ancient) populations.

The full picture

Research on DNA phenotyping has advanced rapidly in the last year with the application of machine learning approaches, but the extent of our current capabilities is still hotly debated.

Last year, researchers from American geneticist Craig Venter’s company Human Longevity made detailed measurements of the physical attributes of around 1,000 people. Whole genomes (our complete genetic code) were sequenced, and the data was combined to build models that predict 3D facial structure, voice, biological age, height, weight, body mass index, eye colour and skin colour.

Read more:
How cops used a public genealogy database in the Golden State Killer case

The study received strong backlash from a number of prominent scientists, including Yaniv Erlich, aka the “genome hacker”. The study seemed to predict average faces based on sex and ancestry, rather than specific faces of individuals. The method of judging the predictions on small ethnically mixed cohorts was also criticised.

Even with accurate facial predictions, Erlich noted that for this approach to identify someone in the real world:

an adversary … would have to create [a] population scale database that includes height, face morphology, digital voice signatures and demographic data of every person they want to identify.

Because without a detailed biometric database you can’t get from the physical predictions to a name.

A database to match?

It turns out that the Australian government is in the process of building such a database. “The Capability” is a proposed biometric and facial recognition system that will match CCTV footage to information from passports and driving licences. Initially billed as a counter-terrorism measure, the service may reportedly also be offered to corporations for a fee.

At the same time, the Australian Tax Office has just initiated a voice recognition service. It’s easy to imagine how this kind of system could be integrated with “The Capability”.

And it’s not only Australia establishing the capability to become a biometric, face-recognising surveillance state. India is deploying the Aadhaar system, and China leads the world in facial recognition.


The Australian Government is building a facial recognition system called The Capability that will match CCTV footage to information from passports and driving licences.
Queensland Government


DNA mugshots

At present, most forensic DNA profiling techniques rely on “anonymous” markers that match identity to a database, but reveal little else about a suspect. With advances in genomic technology, forensic genetics is moving toward tests that can tell us much more about someone.

There are a number of companies that offer DNA phenotyping services for a fee. One company, Parabon NanoLabs, claims to be able to accurately predict the physical appearance of an unknown person from DNA. Police forces already use its services, including the Queensland police in a recent case of a serial rapist on the Gold Coast.

The Parabon system is also based on a predictive model. This was developed by applying machine learning tools to their genetic/trait reference database. The company predicts skin colour, eye colour, hair colour, freckles, ancestry, and face shape from a DNA sample. These predictions, the confidence around them, and a reconstruction made by a forensic artist are used to make a “Snapshot” profile.

Read more:
New cryptocurrencies could let you control and sell access to your DNA data

There is scepticism about Parabon’s capabilities. Its system is difficult to assess because the computer code is not open and the methodology has not been published under peer review.

As with any type of DNA evidence, there is a risk of miscarriages of justice, especially if the evidence is used in isolation. The utility of DNA phenotyping at this point may be more in its exclusionary power than its predictive power. Parabon does state that Snapshot predictions are intended to be used in conjunction with other investigative information to narrow the list of possible suspects.

Where will this all end up?

We only need to look at identical twins to see how much of our face is in our DNA. The question is how many of the connections between DNA and our physical features will we be able to unlock in the future, and how long will it take us to get there?

Some features are relatively easy to predict. For instance, eye colour can be inferred from relatively few genetic variants. Other traits will be more complicated because they are “polygenic”, meaning that many gene variants work together to produce the feature.

A recent study of hair colour genetics, for example, examined 300,000 people with European ancestry. The researchers found 110 new genetic markers linked to hair colour, but the prediction of some colours (black or red) is more reliable than that of others (blonde and brown).


Twins can show us how much of our face is in our DNA.


The way that DNA codes our physical features might be different in people from different ancestral groups. Currently, our ability to predict modern Europeans will be better than other groups – because our genetic databases are dominated by subjects with European ancestry.

As we employ increasingly sophisticated machine learning approaches on bigger (and more ethnically representative) databases, our ability to predict appearance from DNA is likely to improve dramatically.

Parabon’s services come with a disclaimer that the reconstructions should not be used with facial recognition systems. The integration of these technologies is not impossible in the future, however, and raises questions about scope creep.

What does this mean for genetic privacy?

Despite the controversy around what we can do now, the science of DNA phenotyping is only going to get better.

What the rapidly developing field of DNA phenotyping shows us is how much personal information is in our genetic data. If you can reconstruct a mugshot from genetic data, then removing the owner’s name won’t prevent re-identification.

Protecting the privacy of our genetic data in the future may mean that we have to come up with innovative ways of masking it – for example genome cloaking, genome spiking, or encryption and blockchain-based platforms.

The more we understand about our genetic code the more difficult it will become to protect the privacy of our genetic data.

Caitlin Curtis, Research fellow, Centre for Policy Futures (Genomics), The University of Queensland and James Hereward, Research fellow, The University of Queensland

This article was originally published on The Conversation. Read the original article.

Explainer: Can your boss sue you for fighting for proper wages?

CHARLIE LEWIS / Wednesday, May 2, 2018

A Melbourne cafe made headlines last week for alleged underpayment of staff and threats to sue. It may be common in hospitality, but what laws are at play here?

Last week a Melbourne cafe, Barry, joined the long long list of employers under fire for accusations of underpayment.

To recap: it was revealed that employees at Barry were underpaid by around $5/hour and didn’t receive any penalty rates. A picture of the, shall we say, spartan contract circulated which detailed the agreement:

The employees who came forward now claim they have had their shifts indefinitely cut, and after protests followed, the cafe owners have reportedly threatened to sue their workers for harassment. The circumstances are not unique — as a glance at the Fair Work Ombudsman’s media releases will tell you — but for workers, Barry makes a particularly useful box-ticking exercise for figuring out your rights at work.

Can I sign away my minimum wage?

Nope. Under the Fair Work Act any agreement to conditions below those set in the National Employment Standards or the applicable award has no effect.

A workforce (usually with the help of a union) can negotiate a collective agreement that varies those conditions, but it must be assessed and registered with the Fair Work Commission, which assesses it against the relevant conditions and makes sure the affected employees are not worse off. Although, as the last few years have shown, this system is far from foolproof.

Can I get paid in food?

Nope. You can’t be paid “in-kind” — not in goods, services, lodgings, food, nor fawning or flattery. It has to be those real-life dollarydoos, that hardcore scratch, that real-time moolah.

Can I get fired, or lose shifts if I ark up about my pay?

Nope! The Act sets out certain protected workplace rights and you can’t be disadvantaged if you exercise them. They are wide and varied (taking in freedom of association, freedom from discrimination) but they include “the capacity under a workplace law to make a complaint or inquiry”.

Thus Barry staff have the right to enforce their minimum conditions without suffering what the Act calls adverse action — for example losing shifts, or being fired, or being coerced with the threat of legal action. Like a lot of employment law, it’s great in theory, but hard to prove; how can you illustrate what was in your employer’s heart when they cut back your shifts?

But can they sue you?

Harassment, the word used in Barry’s email to their staff, is like bullying — there is a colloquial understanding that doesn’t necessarily match the legal definition. Whether this is a clumsy misuse or not, as it happens there is currently no Commonwealth harassment act, nor an established civil cause of action for harassment recognised by the courts. Instead, it has to be established that the alleged harasser trespassed, or committed assault or nuisance.

The email says the “harassment” is hurting and devaluing the business — however, as a company, Barry cannot sue for defamation.

Original article found HERE.
