The challenges of tackling cybercrime and online harm

Cybercrime. (File photo: Jordan News)
On May 21, the government spokesman and Minister of Communication, Faisal Al-Shboul, announced that Jordan will introduce another update to its cybercrime law.

While the revised law is not yet public, we can look at the previous record of the law as well as how the government defines cybercrime.

Cybercrime is not exclusive to the Kingdom; the majority of countries are dealing with issues such as misinformation, disinformation, digital fraud, online child predation, hacking and theft, impersonation, and libel.

Additionally, with the emergence of AI, we may be looking at falsified images, audio, and stories that can be used for mass confusion or individual blackmail.

The Kingdom is a cork floating on the stormy wave of these global challenges.

A small fish in a big tech pond
As a small state, Jordan has less leverage and influence against big tech companies or global online threats.

How to define a cybercrime is less a technical issue than a political one.

This involves defining hate speech and online harm, deciding whether libel should be a criminal matter, and settling on a definition of freedom of speech.

The current cybercrime law in Jordan has met with great controversy. Political activists say the definition is too broad and can be used to limit partisan activity; journalists believe it curtails media freedom and adds unwritten red lines, thus increasing self-censorship.
Finally, freedom of speech activists have said it unfairly places responsibility on users rather than platforms or providers.

There is no one solution because there is no one definition. Approaches to fraud, for instance, are standard because it is a known crime with a generally agreed-upon definition. But online harm? Hate speech? Speech against the state?

Definitions differ
It will be impossible to arrive at standardized definitions because tolerance for freedom of speech and definitions of hate speech differ. The definitions can differ from state to state, but they should not be vague - that would be a disservice to the people.

For Jordan, a clear distinction must be made between its approach to universal, well-defined cybercrimes and its regulation of social media, which must rest on clear definitions of every matter it covers, such as hate speech, libel, harassment, and bullying.

Once these definitions are set, Jordan can start creating regulations for social media. For example, Germany and Brazil put the solution in the hands of the judiciary. The US is large enough to hold big tech responsible. Turkey requires social media providers to appoint a social media representative to handle and manage any complaints and violations brought up by the authorities. 

Instead of my usual ‘Things You Should Know,’ I asked several questions of Katie Harbath, who spent 10 years at Facebook as Director of Public Policy, where she built and led global teams that managed elections and helped government and political figures use the social network to connect with their constituents. She is now the head of Anchor Change and a global leader at the intersection of elections, democracy, and technology.

Q: Several countries use the term cybercrime, but in your definition, what is a cybercrime?

A: To me, a cybercrime is using digital means to break the law.

Q: What's the difference between a cybercrime and inflammatory online rhetoric?

A: Cybercrime to me means that someone has done something against the law. Depending on the country you are in and the laws it has, inflammatory online rhetoric - while terrible - is likely not against the law.

Q:  In your experience, what are some best practices from around the world in addressing cybercrime on social media platforms?

A: I'll be honest, I'm less familiar with actual cybercrime. In terms of inflammatory online rhetoric, some things that are showing promise are:

— Improved AI tools to detect problematic content.

— Getting counter-narratives out to pre-bunk potential narratives that people might see around events such as the Russian invasion of Ukraine or ahead of elections.

— Transparency and oversight into what is happening on platforms.

Q: Small states like Jordan are not large markets for social media platforms. How can smaller countries successfully get the attention of social media platforms?

A: To be honest, this is going to be even harder now that so many companies are either going through layoffs or are already incredibly small. One thing smaller countries might think about doing is banding together and/or working with larger nations or the United Nations to help get their concerns communicated.

Q: In your experience, do social media platforms have both the linguistic context and the cultural context to address the concerns outside of the US and Europe?

A: It depends on the platform. Your legacy platforms like Facebook and Google likely do have people with this context. Now, the question is how willing they are to do things on a country-by-country basis versus globally. The recent Oversight Board decision on Meta's COVID misinformation policies discusses how the company generally wants to keep things on a global level. This can make nuanced decisions harder. Smaller platforms are much less likely to have any of this context and also tend to take a global approach to content moderation and other issues.
Q: A citizen goes to the Facebook page of a local television channel and uses racist speech. Looking at best practices globally, would it be the user, the television channel that moderates the page, or Facebook that should be held accountable (or should the rhetoric be allowed)?

A: This honestly is an issue up for debate right now. In the United States there's a lot of discussion about changing Section 230, which gives platforms immunity. The Supreme Court just ruled in favor of the platforms in two cases that could have changed this. The Supreme Court will now be taking up two cases looking at whether government officials can ban and/or delete content from their social media accounts. In Europe, a new landmark decision by the European Court of Human Rights (ECHR) says freedom of expression does not immunize public officials from criminal liability if they fail to promptly remove manifestly illegal content (such as “hate speech”) posted on their accounts by followers. Other countries have different rules. I frankly think there's a role for all three - user, page owner, and platform - to play and be held accountable for the rhetoric.

Q: If you were to advise smaller states outside of the US and Europe, such as Jordan, on how to work with big tech platforms, what would your counsel be?

A: I'd say three main things. First, develop pragmatic suggestions that they can actually implement; second, develop incentives for them to take on this work; and third, work together with other countries, which will make it much more likely that they might listen.

Here is my take
The new law is not yet public, and multiple countries are still figuring out how to catch up to where social media and technology are taking the issue.

However, based on global experience, a few things are clear. First, the definition of a cybercrime must be absolutely clear and understandable.

Beyond that, parameters for other offenses like hate speech, criminal libel, child predation, and fraud must also be unequivocally delineated. Individual users cannot be the only ones held responsible - platforms and providers also carry a burden of responsibility.

Whether through negligence, calculated permission, or even exploitation of a post’s sensationalism to boost engagement, they should not be able to evade culpability.

Finally, Jordan is a small state and does not represent a big market for big tech providers, as the Facebook Files leaks showed.

The only way countries like Jordan will be able to protect themselves is by banding together with other states, as Harbath recommended, to jointly deal with platforms and providers. Recently, Jordan put forward a social media regulation draft to the Arab League in an attempt to build regional solidarity toward social media providers.

States like Jordan have a responsibility to follow up with big tech companies, since those companies often do not have the local cultural context to make informed judgments themselves.

Ideally, such a clear and specific cybercrime law will also have an education component.

From an early age, children, their parents, and their teachers need to learn safe online practices to avoid online harm, threats, and predation.

After all, this is about the security of our citizens. We don't need to teach our kids buzzwords or to be tech savvy; we need to teach them the fundamentals of online personal data privacy.

It is difficult to raise awareness of online harm or safeguard against it if you don't know or understand how your personal data is collected and used. 

We should not only make citizens aware of how to protect themselves but also how to protect others.

There's a dangerous trend of thinking that we should be polite and tolerant in society, but that we can then go online and freely air all of our prejudices, grievances, and annoyances.

Our social media space is a reflection of our society. Cyberbullying of women politicians, intolerance, and divisive speech are all signs of an alarming Dr. Jekyll and Mr. Hyde approach to our society. (Remember that Mr. Hyde and his dark habits take over completely by the end of the story.)


This article by Katrina Sammour was first published on Full Spectrum Jordan, a weekly newsletter on Substack.

