On May 15, 2023, Cloudflare announced a new suite of zero-trust security tools that let companies leverage the benefits of AI technologies while mitigating risks. The company integrated the new technologies into its existing Cloudflare One product, a secure access service edge (SASE), zero-trust network-as-a-service platform.
The Cloudflare One platform’s new tools and features are Cloudflare Gateway, service tokens, Cloudflare Tunnel, Cloudflare Data Loss Prevention and Cloudflare’s cloud access security broker.
“Enterprises and small teams alike share a common concern: They want to use these AI tools without also creating a data loss incident,” Sam Rhea, the vice president of product at Cloudflare, told TechRepublic.
He explained that AI innovation is more valuable to companies when it helps users solve unique problems. “But that often involves the potentially sensitive context or data of that problem,” Rhea added.
- What’s new in Cloudflare One: AI security tools and features
- The global SASE and SSE market and its leaders
- The benefits and the risks for companies using AI
- Cloudflare’s swift response to AI
- What’s next for Cloudflare in AI security
What’s new in Cloudflare One: AI security tools and features
With the new suite of AI security tools, Cloudflare One now allows teams of any size to safely use popular AI tools without management headaches or performance challenges. The tools are designed to help companies gain visibility into AI, measure how AI tools are used, prevent data loss and manage integrations.
With Cloudflare Gateway, companies can visualize all the AI apps and services employees are experimenting with. Software budget decision-makers can leverage the visibility to make more effective software license purchases.
In addition, the tools give administrators critical privacy and security information, such as internet traffic and threat intelligence visibility, network policies, open internet privacy exposure risks and individual devices’ traffic (Figure A).
Some companies have realized that in order to make generative AI more efficient and accurate, they must share training data with the AI and grant plugin access to the AI service. For companies to be able to connect these AI models with their data, Cloudflare developed service tokens.
Service tokens give administrators a clear log of all API requests and grant them full control over the specific services that can access AI training data (Figure B). Additionally, they allow administrators to revoke tokens with a single click when building ChatGPT plugins for internal and external use.
Once service tokens are created, administrators can add policies that can, for example, verify the service token, country, IP address or an mTLS certificate. Policies can be created to require users to authenticate, such as completing an MFA prompt before accessing sensitive training data or services.
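As a sketch of how a caller presents a service token, Cloudflare Access expects the token’s credentials in the `CF-Access-Client-Id` and `CF-Access-Client-Secret` request headers; the endpoint URL and credential values below are hypothetical placeholders:

```python
import urllib.request

# Hypothetical endpoint protected by Cloudflare Access; replace with your own.
PROTECTED_URL = "https://training-data.example.com/api/v1/datasets"

def build_service_token_request(url: str, client_id: str,
                                client_secret: str) -> urllib.request.Request:
    """Attach a Cloudflare Access service token to an outbound request.

    Access validates these two headers against the policies configured
    for the application (service token, country, IP, mTLS and so on)
    before the request ever reaches the protected service.
    """
    return urllib.request.Request(
        url,
        headers={
            "CF-Access-Client-Id": client_id,
            "CF-Access-Client-Secret": client_secret,
        },
    )

# Placeholder credentials; real values come from the Zero Trust dashboard.
req = build_service_token_request(PROTECTED_URL, "my-id.access", "my-secret")
```

Because the policy check happens at Cloudflare’s edge, a request missing or failing these headers is rejected before it touches the origin.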
Cloudflare Tunnel allows teams to connect AI tools with their infrastructure without opening inbound holes in their firewalls. The tool creates an encrypted, outbound-only connection to Cloudflare’s network and checks every request against the configured access rules (Figure C).
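As an illustration, the Tunnel connector (`cloudflared`) is typically driven by a small YAML config whose ingress rules map public hostnames to private services; the tunnel ID, hostname, port and file paths below are placeholders:

```yaml
# config.yml for cloudflared — tunnel ID, hostname and paths are illustrative
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json
ingress:
  # Route requests for the AI tool's hostname to a private local service
  - hostname: ai-gateway.example.com
    service: http://localhost:8000
  # cloudflared requires a catch-all rule as the final ingress entry
  - service: http_status:404
```

Because the connector dials out to Cloudflare’s network, no inbound firewall port needs to be opened for the private service to be reachable.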
Cloudflare Data Loss Prevention
While administrators can use security and privacy tools to visualize, configure access to, secure, block or allow AI services, human error can still lead to data loss, data leaks or privacy breaches. For example, employees may accidentally overshare sensitive data with AI models.
Cloudflare Data Loss Prevention closes this human gap with preconfigured options that can check for sensitive data (e.g., Social Security numbers and credit card numbers), run custom scans, identify patterns based on data configurations for a specific team and set limitations for special projects.
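To make the pattern-based checks concrete, here is a minimal sketch of the kind of scanning a DLP profile performs, assuming simple regular-expression detectors for U.S. Social Security numbers and Luhn-validated card numbers (the patterns are illustrative, not Cloudflare’s actual profiles):

```python
import re

# Illustrative detector patterns, loosely modeled on preconfigured DLP
# profiles; real profiles are configured in the Zero Trust dashboard.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to cut false positives on card-number matches."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str) -> list[str]:
    """Return the kinds of sensitive data found in an outbound prompt."""
    findings = []
    if SSN_RE.search(text):
        findings.append("ssn")
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            findings.append("credit_card")
            break
    return findings
```

A gateway applying such a profile could block or log any outbound AI prompt for which `scan` returns a non-empty list.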
Cloudflare’s cloud access security broker
In a recent blog post, Cloudflare explained that new generative AI plugins such as those offered by ChatGPT provide many benefits but can also lead to unwanted access to data. Misconfiguration of these applications can cause security violations.
Cloudflare’s cloud access security broker is a new feature that gives enterprises comprehensive visibility and control over SaaS apps. It scans SaaS applications for potential issues such as misconfigurations and alerts companies if files are accidentally made public online. Cloudflare is working on new CASB integrations, which will be able to check for misconfigurations on new popular AI services such as Microsoft’s Bing, Google’s Bard or AWS Bedrock.
The global SASE and SSE market and its leaders
Secure access service edge and security service edge solutions have become increasingly vital as companies migrate to the cloud and adopt hybrid work models. When Cloudflare was recognized by Gartner for its SASE technology, the company explained in a press release the difference between the two acronyms: SASE services extend the definition of SSE to include managing the connectivity of secured traffic.
The SASE global market is poised to continue growing as new AI technologies develop and emerge. Gartner estimated that by 2025, 70% of organizations that implement agent-based zero-trust network access will choose either a SASE or a security service edge provider.
Gartner added that by 2026, 85% of organizations seeking to procure cloud access security broker, secure web gateway or zero-trust network access offerings will obtain them from a converged solution.
Cloudflare One, which was launched in 2020, was recently recognized as the only new vendor added to the 2023 Gartner Magic Quadrant for Security Service Edge. Cloudflare was identified as a niche player in the Magic Quadrant with a strong focus on network and zero trust. The company faces strong competition from leading companies, including Netskope, Skyhigh Security, Forcepoint, Lookout, Palo Alto Networks, Zscaler, Cisco, Broadcom and Iboss.
The benefits and the risks for companies using AI
Cloudflare One’s new features respond to the increasing demands for AI security and privacy. Businesses want to be productive and innovative and leverage generative AI applications, but they also want to keep data, cybersecurity and compliance in check with built-in controls over their data flow.
A recent KPMG survey found that most companies believe generative AI will significantly impact business; deployment, privacy and security challenges are top-of-mind concerns for executives.
About half (45%) of those surveyed believe AI can harm their organizations’ trust if the appropriate risk management tools are not implemented. Additionally, 81% cite cybersecurity as a top risk, and 78% highlight data privacy threats emerging from the use of AI.
From Samsung to Verizon and JPMorgan Chase, the list of companies that have banned employees from using generative AI apps continues to grow as cases reveal that AI features can leak sensitive business data.
AI governance and compliance are also becoming increasingly complex as new laws like the European Artificial Intelligence Act gain momentum and countries strengthen their AI postures.
“We hear from customers concerned that their users will ‘overshare’ and inadvertently send too much information,” Rhea explained. “Or they can share sensitive information with the wrong AI tools and wind up causing a compliance incident.”
Despite the risks, the KPMG survey reveals that executives still view new AI technologies as an opportunity to increase productivity (72%), change the way people work (65%) and encourage innovation (66%).
“AI holds incredible promise, but without proper guardrails, it can create significant risks for businesses,” Matthew Prince, the co-founder and chief executive officer of Cloudflare, said in the press release. “Cloudflare’s Zero Trust products are the first to provide the guard rails for AI tools, so businesses can take advantage of the opportunity AI unlocks while ensuring only the data they want to expose gets shared.”
Cloudflare’s swift response to AI
The company released its new suite of AI security tools at remarkable speed, even as the technology is still taking shape. Rhea talked about how Cloudflare’s new suite of AI security tools was developed, what the challenges were and whether the company is planning upgrades.
“Cloudflare’s Zero Trust tools build on the same network and technologies that power over 20% of the internet already through our first wave of products like our Content Delivery Network and Web Application Firewall,” Rhea said. “We can deploy services like data loss prevention (DLP) and secure web gateway (SWG) to our data centers around the world without needing to buy or provision new hardware.”
Rhea explained that the company can also reuse the expertise it has in existing, similar functions. For example, “proxying and filtering internet-bound traffic leaving a laptop has a lot of similarities to proxying and filtering traffic bound for a destination behind our reverse proxy.”
“As a result, we can ship entirely new products very quickly,” Rhea added. “Some products are newer — we introduced the GA of our DLP solution roughly a year after we first started building. Others iterate and get better over time, like our Access control product that first launched in 2018. However, because it is built on Cloudflare’s serverless compute architecture, it can evolve to add new features in days or weeks, not months or quarters.”
What’s next for Cloudflare in AI security
Cloudflare says it will continue to learn from the AI space as it develops. “We anticipate that some customers will want to monitor these tools and their usage with an additional layer of security where we can automatically remediate issues that we discover,” Rhea said.
The company also expects its customers to become more aware of where the AI tools they use store the data they operate on. Rhea added, “We plan to continue to ship new features that make our network and its global presence ready to help customers keep data where it should live.”
The challenge remains twofold for a company breaking into the AI security market: cybercriminals are becoming more sophisticated, and customers’ needs keep shifting. “It’s a moving target, but we feel confident that we can continue to respond,” Rhea concluded.