Best of CTCCC 2024: Deepfake Video and Image Detection, AI at the Edge, and Taking AI to the Next Level

The 7th Annual Conference on Information Science and Systems and the 5th Communication Technologies and Cloud Computing Conference took place in Edinburgh, Scotland, on August 14–16, 2024. Our contributing writer, Drew Robb, attended in person.

In this TechRepublic Premium feature, read some of the highlights of an event that covered the manipulation of images and deepfake videos and how to detect them, how artificial intelligence can move to the edge of the network, and how compound AI can take AI to the next level.

    Featured text from the download:

    AI at the Edge

    Generative AI typically requires vast investment in massive systems. Reportedly, it costs as much as $700,000 a day to keep OpenAI’s ChatGPT operating. Hence, only a relatively small group of hyperscalers and well-funded startups are involved in the large-scale development and training of large language models (LLMs). As AI evolves, there is a clear need for these applications to be more broadly distributed. For gen AI to realize its full potential, it needs to be available anywhere, not just via a few cloud providers.

    Thus, the concept of AI at the edge is emerging: AI applications and LLMs running on smartphones and tablets, inside automobiles, or on the factory floor. The envisioned benefits include lower model-training costs, fewer security and privacy issues because the data stays local, and lower latency because the cloud is not involved.

    Prof. Tasos Dagiuklas, Leader of the Smart Internet Technologies Research Group at London South Bank University in the U.K., explored this area during a session on AI at the edge at CTCCC 2024. He detailed how different workloads could be deployed efficiently across distributed cloud computing infrastructure to reduce the burden of a centralized cloud architecture. This is achieved, he said, by decomposing many of the functions involved in gen AI into virtual function chains, which is a good way around the computational and communication limitations that frequently hold large AI engines back. In other words, do a good portion of the AI computation at the edge, and do it in such a way that it can run on relatively simple and inexpensive hardware.
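
To give a concrete picture of what decomposing a gen AI workload into a chain of small, independently placeable functions might look like, here is a minimal, purely illustrative Python sketch. Every name in it (VirtualFunction, run_chain, the edge/cloud tags, and the stub pipeline steps) is hypothetical and is not drawn from Prof. Dagiuklas's presentation or any particular framework.

```python
# Illustrative sketch only: a toy "virtual function chain" that splits a
# generative AI pipeline into small, independently deployable steps.
# All names and placements here are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, List

EDGE, CLOUD = "edge", "cloud"

@dataclass
class VirtualFunction:
    name: str
    placement: str                  # where this step is intended to run
    fn: Callable[[object], object]  # the work performed by this step

def run_chain(chain: List[VirtualFunction], data: object) -> object:
    """Run each function in order. In a real deployment, edge-tagged steps
    would execute on local hardware and only cloud-tagged steps would
    leave the device."""
    for vf in chain:
        print(f"[{vf.placement}] {vf.name}")
        data = vf.fn(data)
    return data

# Toy pipeline: lightweight pre- and post-processing stay at the edge,
# while the heavyweight generation step is offloaded.
chain = [
    VirtualFunction("tokenize", EDGE, lambda text: text.lower().split()),
    VirtualFunction("generate", CLOUD, lambda tokens: tokens + ["<response>"]),
    VirtualFunction("post_process", EDGE, lambda tokens: " ".join(tokens)),
]

print(run_chain(chain, "Hello from the factory floor"))
```

The point of the sketch is the structure, not the stub logic: each step is a small unit with an explicit placement, so an orchestrator can keep inexpensive work on local hardware and route only the heavy steps elsewhere.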

Keep up with the latest AI news with our in-depth 12-page PDF report. This is available for download at just $9. Alternatively, enjoy complimentary access with a Premium annual subscription.

Crafting this content required 22 hours of dedicated writing, editing, research, and design.

Subscribe to the TechRepublic Premium Exclusives Newsletter

Save time with the latest TechRepublic Premium downloads, including customizable IT & HR policy templates, glossaries, hiring kits, features, event coverage, and more. Exclusively for you! Delivered Tuesdays and Thursdays.


Resource Details


* Sign up for a TechRepublic Premium subscription for $299.00/year, and download this content as well as any other content in our library. Cancel anytime. Details here.

Provided by: TechRepublic Premium
Published: August 15, 2024
Topic: TechRepublic Premium
Format: PDF