General discussion

Help us build the world's largest supercomputer

By marketing78
Tags: Cloud
Hi everyone,

I’m the founder of Good AI Lab, and we have just launched our new product, Cluster One (https://clusterone.com/). It’s an ambitious project that depends heavily on community involvement, and I would love to get your feedback on it. At Cluster One we are trying to help advance science by building the world's largest AI supercomputer.

We know how much computing power is wasted every day (around 10 billion hours!), and we feel that with our expertise, and if we all join together, we could really make a difference in advancing scientific research.

The product launched just this week, so I would love your feedback on the site: does everything make sense? Is this something you would want to try, and if not, what would stop you?

All Comments


More updates

by marketing78 In reply to Help us build the World's ...

Just to give you an update on the project: Cluster One is now up and running.

Here, I’d like to explain why this problem in scientific research is important, what we can achieve, and our “master plan” to get there.

AI will be the key to unlocking new scientific discoveries. A year ago, top cancer researchers reported to President Obama on the state of cancer research, and most of their recommendations cited large-scale computing as a way to move the field forward.

AI is not affordable for every company, and we see ourselves as solving part of that problem. Organizations such as the Allen Institute are advancing and spreading algorithms. Companies such as Andrew Ng’s deeplearning.ai are spreading deep learning skills.

What’s missing in that picture is spreading affordable infrastructure and tools. That’s why we are launching Cluster One.

We want to enable researchers to address life-threatening problems by scaling AI to the next level.

It is important to understand what scale we need to reach before we can do something meaningful.
For example, take diabetic retinopathy, a disease that affects people with diabetes and can ultimately cause blindness. It affects nearly 100 million people worldwide.
To understand what it would take to offer an AI-based screening solution, let us assume the following:
- explore 100 ideas
- run 50 experiments per idea
- run each experiment for a week of computation, on 50 machines

That’s a total of 42MM compute hours (100 ideas × 50 experiments × 50 machines × 168 hours).

That would cost around $10MM on the public cloud (e.g., on AWS’s c4.2xlarge), or several tens of millions of dollars in upfront investment for private infrastructure.

Or it could be provided by 15,000 contributors, each donating 8 hours of compute a day for a year on a recent computer.
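
To sanity-check that arithmetic, here is a minimal sketch in Python. The hourly rate is an assumption (a rough effective price consistent with the ~$10MM figure above, not a quoted AWS price); actual costs vary by region and pricing model.

    # Back-of-envelope check of the compute estimate above.
    IDEAS = 100                # ideas to explore
    EXPERIMENTS_PER_IDEA = 50  # experiments per idea
    MACHINES = 50              # machines per experiment
    HOURS_PER_WEEK = 7 * 24    # one week of computation = 168 hours

    total_hours = IDEAS * EXPERIMENTS_PER_IDEA * MACHINES * HOURS_PER_WEEK
    print(f"Total compute: {total_hours:,} machine-hours")  # 42,000,000

    # Assumed effective cloud rate in $/hour (hypothetical; chosen to be
    # consistent with the ~$10MM figure above).
    RATE = 0.25
    print(f"Estimated cloud cost: ${total_hours * RATE:,.0f}")  # ~$10,500,000

    # Volunteer equivalent: hours one contributor donates in a year.
    hours_per_contributor = 8 * 365  # 2,920 hours
    contributors = total_hours / hours_per_contributor
    print(f"Contributors needed: {contributors:,.0f}")  # ~14,384, call it 15,000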

That’s why we believe in the power of distributed computing, and we’re on a mission to scale AI so researchers can push science further. Feel free to reach out to me if you have any questions or would like to learn more about the movement.

Read my full article on Medium here: https://medium.com/@mhejrati/announcing-cluster-one-the-largest-ai-supercomputer-3abff76a0bb2
Or join Cluster One community: https://clusterone.com/

