Cybersecurity in Events: What to Know in 2026 and Beyond

December 1, 2025


From data leaks and fake events to costly cases of fraud, online threats have endangered events for as long as the industry has operated online. However, contrary to the promises of AI developers, new developments from AI to quantum computing have largely undermined the typical cybersecurity playbook and, in some cases, introduced entirely new risks.

What are the risks to an event, and what does an event team need in place to safeguard attendees, data and events from bad actors?

The Risks

Deepfakes, Fake Events and Fraud

While many of us think of cybersecurity threats as being highly technical, the most common scams involve exploiting human mistakes to gain otherwise prohibited access to systems, data or funds.

Imposter scams involve a bad actor who solicits payments under false pretenses by pretending to be a legitimate recipient. After investment scams, imposter scams were the most costly type of fraud, with almost $3 billion in reported losses in 2024. Losses from scams specifically involving business and job opportunities totaled $750.6 million in 2024, a roughly $250 million increase over 2023.

Case in point. This is what happened to Liz Lathan, who paid $40,000 USD to an imposter through a fraudulent email designed to look exactly like her venue’s. While Lathan was unable to recover the money, she and her team were fortunately still able to pay the venue and move forward with their event.

Case in point. Fraudsters can also impersonate clients. When “Gregory Mount” of Glidden Paints reached out to Nirjary Desai with a launch party and a list of preferred vendors, she requested invoices, tax documents and phone conversations to validate them. Once provided, she was told to pay the vendors up front and that she would be reimbursed. While it initially appeared as though she had been, the payment was made using a stolen credit card and was later invalidated. 

To make matters worse, Desai provided her banking information to enroll in Glidden’s central payment system, which the scam artists used to forge an invoice and withdraw an additional $10,000 from her account, leaving her out almost $20,000 in total.

Per Skift’s reporting, “79% of organizations experienced attempted or actual payments fraud” in 2024 and “the most common method is business email compromise — often through spoofed emails and fake vendor scams like this one.”

According to IBM cybersecurity expert Jeff Crume, artificial intelligence (AI) is only making scams like these easier to execute. Using generative AI, scam artists can now fabricate more convincing and personalized phishing emails, and ever-improving deepfakes are already being used to scam organizations out of millions of dollars. Per Crume’s video, an attacker used a deepfake to emulate the CFO of a major retailer and convince an employee to wire $25 million to the scammer’s account. More recently, a deepfake was famously used to undermine the Democratic Party in the New Hampshire primary by fabricating a robocall of Joe Biden telling people not to bother voting.

Data Leaks and Breaches

Event industry data leaks occur when attendee, staff or operational data is maliciously accessed and exposed – often with dire consequences, including reputational damage, financial penalties, non-compliance with data protection laws such as the GDPR, and other legal liabilities.

Within the event industry, attacks like these have impacted almost every link in the supply chain, from world-renowned venues to third-party ticketing platforms and even ubiquitous software providers like Microsoft.

Case in point. Stadium management company Legends International, LLC suffered a data breach on November 9, 2024 that compromised the data of both venue staff and visitors. According to a report submitted to the Texas Attorney General’s Office in April of this year (2025), the leak potentially exposed Social Security numbers, financial account information, government-issued ID numbers and medical information. Despite immediately taking measures to mitigate the damage, conducting an investigation and notifying those whose data might have been leaked, Legends was facing a potential class action lawsuit. Though it appears the primary plaintiffs representing Legends’ staff ultimately dropped their claims, the case shows that the victims of these crimes may themselves be held liable if their risk management is seen as lacking.

However, the threshold for due diligence may be moving as AI opens new avenues for those who want to hack into a business’s systems. Crume warns that AI can already be used to write malware that can break into existing systems and tools, which means that bad actors don’t even need to be tech experts. A recent study supported by IBM found that, within a controlled environment, GPT-4 was able to autonomously exploit real-world systems 87% of the time when given an adequate description of a zero-day vulnerability (i.e. a bug or flaw in the code that the vendor was unaware of). This has already happened in the real world, according to Crume, who cites a major online retailer that attributes a reported 7x increase in the number of cyber attacks to AI.

Quantum Computing

As with instances of fraud, exploiting human errors is currently one of the most common methods of gaining access to sensitive data. There is an adage in cybersecurity circles: Hackers don’t hack in, they log in.

This is because passwords are heavily encrypted during storage and transmission, and cracking that encryption by brute force (i.e. trying every possible character combination until the right one is found) could take conventional computers literally centuries – even with the help of AI.
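To put those centuries in perspective, here is a back-of-the-envelope sketch. The guess rate below is an illustrative assumption for a well-resourced attacker, not a benchmark:

```python
# Rough estimate of brute-force search time for a random password.
# The guesses-per-second figure is an illustrative assumption.
ALPHABET = 26 + 26 + 10 + 32   # lowercase + uppercase + digits + symbols = 94
LENGTH = 12                    # a 12-character random password
GUESSES_PER_SECOND = 1e12      # hypothetical well-resourced attacker

keyspace = ALPHABET ** LENGTH                  # every possible combination
seconds = keyspace / GUESSES_PER_SECOND        # worst-case search time
years = seconds / (60 * 60 * 24 * 365)
print(f"{keyspace:.2e} combinations ≈ {years:,.0f} years at 1e12 guesses/sec")
```

Even at a trillion guesses per second, exhausting the keyspace of a 12-character random password takes on the order of ten thousand years – which is exactly why attackers prefer to "log in" rather than hack in.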

However, Crume notes that quantum computing has the potential to render current encryption standards obsolete in the next five to ten years. According to Specops, this is because traditional cryptographic systems depend on the limited processing speed of conventional computers, which represent data in binary (1s and 0s). Quantum computers, on the other hand, use a property called “superposition” to represent data in multiple states at the same time. What might take a conventional computer thousands of years to process might take a quantum computer only minutes.

So if the technology isn’t there yet, why is this a risk for 2026?

Because data breaches are still happening and bad actors can “harvest now, decrypt later,” as Crume puts it. Protecting data is more important than ever because the full impact of breaches happening now may only become evident years down the line, when hackers have access to much more sophisticated methods of breaking current encryptions.

To prepare for that, Crume recommends using multifactor authentication along with more secure passkeys. These are based on a “challenge-response” system in which users are asked to do something in the moment that grants access, like scanning their face or fingerprint – although Specops indicates that these, too, will eventually have to be reinforced once quantum computing becomes more widely available.
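The challenge-response idea can be sketched in a few lines. Note this is a toy illustration: real passkeys (WebAuthn/FIDO2) use asymmetric key pairs, whereas a shared secret stands in here so the sketch stays self-contained:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response flow. Real passkeys (WebAuthn/FIDO2) use asymmetric
# key pairs; a shared secret is substituted here purely for illustration.
shared_secret = secrets.token_bytes(32)  # provisioned once, at registration

def server_issue_challenge() -> bytes:
    """Server sends a fresh, unpredictable nonce for each login attempt."""
    return secrets.token_bytes(16)

def client_respond(challenge: bytes) -> bytes:
    """Client proves possession of the secret without transmitting it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # timing-safe comparison

challenge = server_issue_challenge()
response = client_respond(challenge)
assert server_verify(challenge, response)
# A captured response is useless against the next, different challenge:
assert not server_verify(server_issue_challenge(), response)
```

The key property is that nothing reusable crosses the wire: each login answers a one-time challenge, so an eavesdropper who records a response cannot replay it.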

Unverified AI Implementations and Increased Attack Surface

Artificial intelligence doesn’t only present a risk in the hands of bad actors; it also increases risk when we use it ourselves. The number of potential entry points (a.k.a. the “attack surface”) increases whenever a new piece of tech is added – and that now includes AI. For Crume, this manifests in two ways.

The first is what he calls “shadow AI,” wherein AI is used unofficially, for purposes beyond what an organization has approved or vetted. As many rush to implement AI tools and services without any real security vetting process, adoption is simply outpacing the ability to responsibly understand and manage risk. As people at all levels of an organization embrace the “move fast and break things” mantra, it will become increasingly difficult for companies to control which AI applications are in use.

At the same time, there is mounting pressure within the broader job market for inexperienced users to “vibe code” AI automations and agents themselves with no real understanding of the vulnerabilities they might be introducing. This risk is compounded by the placement of AI in virtually all mobile devices (which were already vulnerable to effectively becoming hackable surveillance systems).

The second is through “prompt injection,” the practice of manipulating or socially engineering AI into misbehaving. Bad actors are constantly testing limits in order to break the guardrails, and according to the Open Worldwide Application Security Project (OWASP), prompt injection is the number one security risk for the LLM applications that power the most popular generative AI services, tools and plug-ins professionals are using.

The Measures

Training and Vigilance

Human vulnerability remains the first thing bad actors will try to exploit, so training staff to recognize and deal with scams should be an organization’s first line of defense. Speaking with TSNN, Maritz VP of Product Management Aaron Dorsey stressed the importance of “making security a part of your [business] culture” with cybersecurity training at all levels. This also applies to AI, noted Maritz CTO John Wahle, who believes that embracing AI from the top down provides the guidance necessary to implement it safely.

In terms of where to source that training, there are many courses for small businesses and professionals on platforms like Coursera and Udemy – often created in partnership with well-established organizations like IBM and Google. The CISA Learning platform also offers a wide range of courses on cybersecurity and in various formats, many of which are on demand.

Once staff are aware of the dangers, a little vigilance can go a long way in identifying common scams (phishing over email and text messages, malware, and risky end-user behavior). While incorrect spelling and grammar may no longer give away a phishing scheme, there are a few things that should be flagged. These include email addresses that look incorrect or do not match a company’s web domain, generic greetings that address the recipient as “user” or “customer,” calls to action to log into or download something, and a manufactured sense of urgency.
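Those red flags can even be expressed as a simple heuristic check. The sketch below is illustrative only – the trusted domain, keyword lists and patterns are assumptions, and no keyword filter substitutes for trained human judgment:

```python
import re

# Heuristic checks mirroring the red flags above. A sketch, not a spam
# filter: the trusted domain and keyword lists are illustrative assumptions.
TRUSTED_DOMAIN = "examplevenue.com"   # hypothetical known partner domain
GENERIC_GREETINGS = {"dear user", "dear customer", "dear sir/madam"}
URGENCY_WORDS = {"urgent", "immediately", "act now", "within 24 hours"}

def phishing_flags(sender: str, body: str) -> list[str]:
    """Return a list of red flags found in an email's sender and body."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != TRUSTED_DOMAIN:
        flags.append(f"sender domain '{domain}' does not match '{TRUSTED_DOMAIN}'")
    lowered = body.lower()
    if any(g in lowered for g in GENERIC_GREETINGS):
        flags.append("generic greeting")
    if any(w in lowered for w in URGENCY_WORDS):
        flags.append("sense of urgency")
    if re.search(r"(log ?in|download|click here)", lowered):
        flags.append("call to action to log in or download")
    return flags

# A look-alike domain (.co instead of .com) trips every check:
print(phishing_flags("billing@examplevenue.co",
                     "Dear customer, act now and log in to confirm payment."))
```

A flagged message isn’t proof of fraud, and a clean one isn’t proof of legitimacy – the point is to slow down and verify before money moves.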

However, as deepfakes improve, additional steps may be warranted in verifying a contact’s identity – especially where large sums are involved. “From now on, large payments always come with a phone call,” said Lathan, who has made it a policy that anyone processing payments at Club Ichi complete fraud awareness training. However, AI has already been used to swindle people with deepfaked voices, so we may need to devise new ways of validating that we’re speaking to the right person. One strategy might be to look up the organization’s phone number on its official website, call directly and ask to be transferred to the person in question.

Live Events

Of course, when it comes to the risk of imposters, live events remain a bastion of trust as vendor relationships are built face to face and deals can be struck in person. As increasingly competent deepfakes place more of a burden on legitimate partners, we can expect more business to start on the trade show floor.

However, to ensure event data remains secure, it’s critical to use a reputable event tech platform. Stova’s entire ecosystem, from the point of registration to the post-event analytics, is compliant with the most rigorous data security standards. Our lead capture technology provides exhibitors and attendees a secure, seamless way to exchange contact information, and our mobile event app and virtual event platform offer a secure closed-access venue for ongoing conversations.

Cyber Security Certifications and Data Management Best Practices

According to Skift Meetings’ Event Tech Almanac 2025, while all of the event management platforms reviewed complied with GDPR, offered user roles and permissions, and held ISO 27001 certification, a quarter failed to offer data encryption and SOC 2 compliance.

Stova is committed to maintaining the highest standards of security, which is critical as these standards evolve. The National Institute of Standards and Technology (NIST) is already working to develop standards for post-quantum cryptographic algorithms, and they offer recommendations and education for organizations preparing for post-quantum cybersecurity. 

Another way to protect attendees and other stakeholders from data breaches is simply to avoid storing data unnecessarily. Writing for Conference News, Adrian Pragnell recommends collecting “only the data that is necessary for the event’s purpose(s)” and automatically deleting any data you no longer need. Not only would this reduce the fallout of any potential breach, it also supports compliance with GDPR and emerging US state laws that limit the use and retention of data.

However, your ability to do this is largely dependent on your event tech stack – another reason to be cautious when selecting event tech partners. Per Skift Meetings’ Event Tech Almanac 2025, “[while] most event tech vendors act as data processors, giving clients full ownership and control of attendee data… some vendors use a shared data ownership model, requiring attendees to become platform users.” The question of whether tech platforms should be able to retain an event’s attendee data once the event has ended has fuelled a years-long debate in the industry. 

Stova’s position is that event organizers should retain full control over their event data, which will only remain on the platform as long as it serves them.

Pragnell also recommends that policies be reinforced with comprehensive audits and an incident response plan to mitigate damages and losses for affected individuals. For example, Legends International offered all those affected by its breach two years of complimentary identity protection through Experian IdentityWorks. Conducting “tabletop exercises” that simulate breaches can also help to ensure everyone is prepared in the event of an attack.

Conclusion

Cybersecurity in events is no longer just about firewalls and anti-virus software – it’s about anticipating risks that are evolving alongside the technologies we use every day. From AI-powered phishing emails to the looming potential of quantum computing, the stakes are high and on the rise for organizers who handle sensitive personal and organizational data.

What this means in practice is twofold. First, vigilance and training are non-negotiable. Human error remains the most common point of entry, and even the most sophisticated systems can be undone by a bit of misplaced trust. Building a culture of security awareness at every level of your organization is the strongest defense against scams and breaches.

Second, your choice of technology partners matters more than ever. Platforms that prioritize compliance, transparency, and data ownership not only reduce your exposure to cyber threats but also give you peace of mind that attendee trust is being safeguarded. Stova, for example, is committed to ensuring organizers retain full control over their data while meeting the industry’s most rigorous security standards.

The message for 2026 is clear: cybersecurity is not a one-time investment, but an ongoing discipline. By combining strong partners, secure systems and well-trained teams, event professionals can keep on top of the changing trends and maintain the trust of attendees and other stakeholders using their technology.

