Building safer Internet practices to protect your business and employees

 

The Government’s recently released and broadly welcomed proposed legislation on the use of the Internet and social media will be disturbing news for the giant companies that run the vast systems underpinning our online communications. Under the new laws, Facebook, Twitter and other platforms would be required to take responsibility for the well-being and safety of their users and to protect them from damaging and dangerous content.

 

The issue has risen to the surface through a number of notable cases in which individuals have been exposed, for example, to self-harm videos, leading them to act on that content and, in extreme cases, to take their own lives. The proposed laws go further than this, however: they seek to limit the availability of extremist propaganda, displays of violence by radical groups and the dissemination of material that could radicalise impressionable young minds.

 

Self-regulation has not worked and so Parliament has decided to take matters into its own hands with this legislation. In bringing this to people’s attention, it is hoped that individuals will also take responsibility for their own safety online and will take steps to protect others who are more vulnerable. In the workplace, where Internet use is high and where the web is the medium for engagement with followers and customers, it’s also important for companies to have clear policies on the use of these tools and to protect themselves and their employees from harm.

 

For businesses large and small, there is a need for secure Internet policies and practices not only to protect the company against attack and abuse but, more importantly, to ensure the safety of employees. It’s important that companies do not unwittingly propagate fake news or become embroiled in situations that may affect their business adversely. We look at a few points below that are relevant to most people.

 

The proposed new laws

 

First of all, it’s important to understand what the Government is aiming to achieve.

 

“The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.”

(Press release: “UK to introduce world first online safety laws” – Department for Digital, Culture, Media & Sport)

 

Essentially, if you run any sort of social platform, large or small, there will be constraints put upon you to monitor content and to protect individuals from the negative effects of anything that could be considered to fall under that “duty of care”. The gov.uk press release makes it clear what will be covered:

 

    • A new statutory ‘duty of care’ to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
    • Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.
    • Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.
    • Making companies respond to users’ complaints, and act to address them quickly.
    • Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
    • A new “Safety by Design” framework to help companies incorporate online safety features in new apps and platforms from the start.
    • A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.

(ibid.)

 

For most SMEs, this proposed legislation will have relatively little impact on day-to-day activity. However, the spirit of the law, and indeed its intentions, offer a model that company owners would do well to follow.

 

Safer Internet

 

This generic term is usually applied when considering the safety online of young people and children. Although this is a major priority for Government and indeed should be for us all, it’s also important to consider how the people we care for in our workplaces may be affected by unsafe practices, particularly where they are exposed to Internet systems during their time at work.

 

Under this heading, employers should consider how content created by the company is deployed, who its readership is, and the likelihood of its misinterpretation or misuse in the wrong hands. For example, highly detailed surgical diagrams and photography, weapons systems, trauma management (accidents and emergencies) and other such graphic material may affect young minds if freely accessible in a public space. Placing it behind a pay-wall or a login system with verification is much better practice.
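The gating described above amounts to a simple access check before sensitive material is served. The sketch below is purely illustrative; the `User` record and its `logged_in`/`age_verified` fields are assumptions for the example, not any particular CMS or platform API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    # Hypothetical user record -- the fields are assumptions for
    # illustration, not a real platform's data model.
    logged_in: bool = False
    age_verified: bool = False

def can_view_sensitive(user: Optional[User]) -> bool:
    """Show graphic material (e.g. surgical imagery, trauma photos)
    only to users who are both logged in and verified."""
    return user is not None and user.logged_in and user.age_verified
```

Anonymous visitors and unverified accounts are refused; in practice the same check would sit in front of every route or download that serves the sensitive content.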

 

Circulated jokes and memes are usually harmless, yet their content can often be offensive, particularly with regard to race, religion, sexual orientation, disability or other personal characteristics. The areas of activity that should be considered, and are indeed covered in the proposed legislation, include all forms of sexual abuse and exploitation; terrorist content and incitement; criminal activity; pornography, including revenge porn and the sexting of indecent images by under-18s; harassment and hate crimes; incitement to suicide and self-harm; violence; illegal goods, weapons and drugs; and interference with the due process of law.

 

Cyber bullying

 

Just as bullying in the workplace should attract punitive measures, so too should online bullying. This may take the form of shaming, teasing, impersonation, outing, stalking, exclusion or anything else that the bully can exploit as a vulnerability in the target of their attentions. Employers should be aware of the possibility of such activity and treat it very seriously when it is brought to their attention. A good overview of this topic may be found on the BullyingUK site.

 

Disinformation

 

Although companies are unlikely to be the source of disinformation, their employees may be the unwitting servants of it. Fake news has been at the forefront of the Brexit campaign and of Donald Trump’s presidency. We have a tendency to believe what we see online or in the media, particularly if it fits our own personal narrative. Yet this is a dangerous approach to information. Everyone should be encouraged to question what they see, particularly if it is provocative or likely to defame someone in the public eye. So-called “bots” amplify and spread fake news, pushing erroneous and malicious content into the timelines of unwitting users who then propagate it through Likes and Shares to their own communities. Many myths have been given artificial oxygen in this way, to the detriment of informed discussion and debate.

 

A useful Government resource aims to help people understand how to identify fake news and deal with it. “Don’t feed the beast” can be circulated freely to help people be discriminating about their choice of content.

 

How businesses should respond in a responsible way

 

It’s essential that any company with more than a couple of employees should have at least the bare bones of a safe Internet policy that also embraces social media use, appropriate use of company computer equipment and policies relating to respect in the workplace.

 

Such a policy can form part of an Employee Handbook, and it should include controls on who posts what on social media. Guidelines for content, tone and the company “voice” online can also usefully be held here as a reference point for new joiners. Inevitably, staff will check their phones and social platforms, often during work time; how acceptable this is remains a matter for each company and its management. Firm guidelines on what is acceptable should be laid down, including, for instance, a guest network for personal device traffic in the workplace, which can then be limited by time or by throughput to minimise the impact on the main company network.
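The time-based limit mentioned above can be expressed as a small policy rule. This is a minimal sketch with made-up values (lunchtime-only access, a ~2 Mbit/s cap); each company would set its own hours and limits, and enforcement would sit in the router or firewall rather than in application code.

```python
from datetime import time

# Illustrative policy values only -- each company would choose its own.
GUEST_HOURS = (time(12, 0), time(14, 0))   # personal devices allowed at lunchtime
GUEST_BANDWIDTH_KBPS = 2048                # cap guest traffic at roughly 2 Mbit/s

def guest_access_allowed(now: time) -> bool:
    """Return True if personal-device traffic is permitted at this time of day."""
    start, end = GUEST_HOURS
    return start <= now <= end
```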

 

By the same token, it’s possible to control access to certain sites that may not be suitable for consumption during the working day. Monitoring traffic is not illegal, but it is advisable to inform staff that the company does this in order to encourage people not to contravene online policies.
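Site blocking of this kind usually comes down to matching a requested host against a blocklist, including its subdomains. The sketch below is illustrative only: the `.test` hostnames are placeholders, and a real deployment would rely on a maintained filtering service or DNS blocklist rather than a hand-kept set.

```python
from urllib.parse import urlparse

# Hypothetical blocklist -- real deployments would use a maintained
# category-based filtering service instead of a hard-coded set.
BLOCKED_HOSTS = {"example-gambling.test", "example-adult.test"}

def is_blocked(url: str) -> bool:
    """Block a URL if its host, or any parent domain of it, is listed."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Build every suffix of the hostname: ads.x.test -> {ads.x.test, x.test, test}
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLOCKED_HOSTS)
```

Because the check walks parent domains, `ads.example-gambling.test` is caught by the `example-gambling.test` entry without listing every subdomain separately.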

 

It’s vital that the company actively responds to examples of inappropriate online behaviour such as cyber-bullying as part of HR activity, in exactly the same way that it would to similar behaviour offline.

 

ExtraMile’s experience

 

As a digital marketing company, ExtraMile majors in online work and its employees spend the majority of their time engaged through online systems, both internal and client-based. Strong policies for computer and Internet use ensure that everyone understands his or her responsibilities to others and to the company.

 

ExtraMile directors take their duty of care to staff very seriously and that care also extends into the wider community. Currently, the company’s MD, Gabrielle Hadley, is having an online birthday amnesty to raise funds for MIND. (You can see her fundraising page here.)

 

Online safety is not just about protecting employees – it’s also about protecting the company and its associates. Good online practice and clear policies will ensure that no one propagates inappropriate material or posts offensive content without consideration of its impact.
