Balancing Brains and Brawn: AI Innovation Meets Sustainable Data Center Management
https://ai-techpark.com/balancing-brains-and-brawn/ (Wed, 07 Aug 2024)

Explore how AI innovation and sustainable data center management intersect, focusing on energy-efficient strategies to balance performance and environmental impact.

With all that’s being said about the growth in demand for AI, it’s no surprise that powering all that AI infrastructure and eking out every ounce of efficiency from these multi-million-dollar deployments are top of mind for those running the systems. Each data center, be it a complete facility or a floor or room in a multi-use facility, has a power budget. The question is how to get the most out of that power budget.

Key Challenges in Managing Power Consumption of AI Models

High Energy Demand: AI models, especially deep learning networks, require substantial computational power for training and inference, predominantly handled by GPUs. These GPUs consume large amounts of electricity, significantly increasing the overall energy demands on data centers. AI and machine learning workloads are reported to double computing power needs every six months. The continuous operation of AI models, processing vast amounts of data around the clock, exacerbates this issue, increasing both operational costs and energy consumption. Remember, it’s not just model training but also inferencing and model experimentation that consume power and computing resources.
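
To put that reported six-month doubling in perspective, the compounding is steep. The quick back-of-the-envelope calculation below is purely illustrative and assumes the doubling rate simply holds steady.

```python
# Illustrative only: compound growth implied by a six-month doubling of compute demand.
doubling_period_months = 6

for years in (1, 2, 3):
    growth = 2 ** (years * 12 / doubling_period_months)
    print(f"{years} year(s): ~{growth:.0f}x today's compute demand")

# Prints roughly: 1 year(s): ~4x, 2 year(s): ~16x, 3 year(s): ~64x
```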

Cooling Requirements: With great power comes great heat. In addition to total power demand increasing, power density (i.e., kW per rack) is climbing rapidly, necessitating innovative and efficient cooling systems to maintain optimal operating temperatures. Cooling systems themselves consume a significant portion of the energy: the International Energy Agency reports that cooling consumed as much energy as the computing itself, with each function accounting for 40% of data center electricity demand and the remaining 20% going to other equipment (a split that implies a power usage effectiveness, or PUE, of roughly 2.5).

Scalability and Efficiency: Scaling AI applications increases the need for computational resources, memory, and data storage, leading to higher energy consumption, and scaling AI infrastructure efficiently while keeping energy use in check is complex. Processor performance has grown faster than the ability of memory and storage to feed the processors, leading to the “Memory Wall” as a barrier to high utilization of the processors’ capabilities. Unless the memory wall can be broken, users are left with a sub-optimal deployment of many under-utilized, power-hungry GPUs to do the work.
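
A rough roofline-style calculation illustrates the memory wall. The throughput and bandwidth figures below are rounded, assumed numbers for a modern training GPU, not vendor specifications.

```python
# Back-of-the-envelope "memory wall" check (assumed, rounded figures).
peak_flops = 300e12       # ~300 TFLOPS of dense tensor throughput
mem_bandwidth_bps = 2e12  # ~2 TB/s of on-package memory bandwidth

balance_point = peak_flops / mem_bandwidth_bps
print(f"~{balance_point:.0f} FLOPs must be performed per byte moved to keep the GPU busy")
# Workloads with lower arithmetic intensity stall on memory, leaving the GPU under-utilized.
```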

Balancing AI Innovation with Sustainability

Optimizing Data Management: Datasets rapidly growing past the petabyte scale mean rapidly growing opportunities to find efficiencies in how that data is handled. Tried-and-true data reduction techniques such as deduplication and compression can significantly decrease computational load, storage footprint, and energy usage, provided they are performed efficiently. Technologies like SSDs with computational storage capabilities enhance data compression and accelerate processing, reducing overall energy consumption. Data preparation through curation and pruning helps in several ways: (1) reducing the data transferred across networks, (2) reducing total dataset sizes, (3) distributing part of the processing tasks, and the heat that goes with them, and (4) reducing GPU cycles spent on data organization.
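
As a simplified illustration of how deduplication and compression shrink what must be stored and moved, the sketch below estimates a reduction ratio over a stream of data blocks. The fixed-size chunking, SHA-256 hashing, and zlib codec are arbitrary choices for the example; production pipelines are far more sophisticated.

```python
import hashlib
import zlib

def dedupe_and_compress(blocks):
    """Toy illustration of deduplication followed by compression.

    `blocks` is an iterable of bytes objects (e.g., fixed-size chunks of a
    dataset). Returns (raw_bytes, stored_bytes) so a reduction ratio can be
    estimated.
    """
    seen = set()
    raw_bytes = 0
    stored_bytes = 0
    for block in blocks:
        raw_bytes += len(block)
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue                      # duplicate block: store nothing new
        seen.add(digest)
        stored_bytes += len(zlib.compress(block))
    return raw_bytes, stored_bytes

# Example: 1,000 chunks drawn from only 10 distinct payloads.
chunks = [f"record-{i % 10}".encode() * 512 for i in range(1000)]
raw, stored = dedupe_and_compress(chunks)
print(f"reduction ratio: {raw / stored:.1f}x")
```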

Leveraging Energy-Efficient Hardware: Use domain-specific compute resources instead of relying solely on traditional general-purpose CPUs. Domain-specific processors are optimized for a particular set of functions (such as storage, memory, or networking) and may combine right-sized processor cores (as enabled by Arm with its portfolio of processor cores, known for reduced power consumption and higher efficiency, which can be integrated into system-on-chip components), hardware state machines (such as compression/decompression engines), and specialty IP blocks. Even within GPUs there are various classes, each optimized for specific functions; those optimized for AI tasks, such as NVIDIA’s A100 Tensor Core GPUs, enhance performance for AI/ML while maintaining energy efficiency.

Adopting Green Data Center Practices: Investing in energy-efficient data center infrastructure, such as advanced cooling systems and renewable energy sources, can mitigate the environmental impact. Data centers consume up to 50 times more energy per unit of floor space than conventional office buildings, making efficiency improvements critical. Leveraging cloud-based solutions can enhance resource utilization and scalability, reducing the physical footprint and associated energy consumption of data centers.

Innovative Solutions to Energy Consumption in AI Infrastructure

Computational Storage Drives: Computational storage solutions, such as those provided by ScaleFlux, integrate processing capabilities directly into the storage devices. This localization reduces the need for data to travel between storage and processing units, minimizing latency and energy consumption. By including right-sized, domain-specific processing engines in each drive, performance and capability scale linearly with each drive added to the system. Enhanced data processing capabilities on storage devices can accelerate tasks, reducing the time and energy required for computations.

Distributed Computing: Distributed computing frameworks allow for the decentralization of computational tasks across multiple nodes or devices, optimizing resource utilization and reducing the burden on any single data center. This approach can balance workloads more effectively and reduce the overall energy consumption by leveraging multiple, possibly less energy-intensive, computational resources.

Expanded Memory via Compute Express Link (CXL): Compute Express Link (CXL) technology is specifically targeted at breaking the memory wall.  It enhances the efficiency of data processing by enabling faster communication between CPUs, GPUs, and memory. This expanded memory capability reduces latency and improves data access speeds, leading to more efficient processing and lower energy consumption. By optimizing the data pipeline between storage, memory, and computational units, CXL can significantly enhance performance while maintaining energy efficiency.

Liquid Cooling and Immersion Cooling: Liquid cooling and immersion cooling (related, but not the same!) offer significant advantages over the fan-driven air cooling the industry has grown up on. Both dissipate more heat cost-effectively and efficiently and even out temperatures in the latest power-dense GPU and HPC systems, where fans have run out of steam.

In conclusion, balancing AI-driven innovation with sustainability requires a multifaceted approach, leveraging advanced technologies like computational storage drives, distributed computing, and expanded memory via CXL. These solutions can significantly reduce the energy consumption of AI infrastructure while maintaining high performance and operational efficiency. By addressing the challenges associated with power consumption and adopting innovative storage and processing technologies, data centers can achieve their sustainability goals and support the growing demands of AI and ML applications.

AITech Interview with Erling Guðmundsson, COO of atNorth
https://ai-techpark.com/aitech-interview-with-erling-gudmundsson-coo-of-atnorth/ (Tue, 07 May 2024)

Explore how AI impacts digital infrastructure and how atNorth is shaping a sustainable future for data centers in the Nordics.

What impact does AI have on Digital Infrastructure?

There are currently two megatrends that are dominating global business. The first is the explosion in the use of AI as businesses strive to increase productivity and operational efficiencies. The second is the worsening climate crisis and the financial impact of increasing energy costs. 

Data centers that are built to cater for this type of High Performance Computing (HPC) require powerful systems with significant cooling requirements that involve the use of large amounts of energy at considerable costs. The amount of energy required to upgrade legacy data centers to HPC standards is vast. If every data center upgraded to adhere to HPC specifications, the world would experience a serious energy crisis. 

At the intersection of these two megatrends is the data center, and choosing the right one can not only future-proof against technological advancements but also significantly reduce a business’s environmental impact. Infrastructure cooling is responsible for 40 percent of the total electricity cost of most data centers; that, coupled with ever-increasing energy prices, has led organizations to consider the physical environment where their data center sites are located.

Data centers in countries with a cooler and more consistent climate, such as the Nordic region, can provide high-density air and liquid cooling solutions that maintain temperature and humidity levels within the data center more efficiently. These solutions reduce energy use and ultimately decrease both the overall carbon footprint and the cost to the business.

Why are the Nordics emerging as a leading location for the data center industry?

The importance of location is becoming more and more prevalent in data center decision making today. The Nordics are emerging as a safe, sustainable, and superior place to house today’s enterprise data. This is due to several factors – from the region’s abundant access to renewable power resources to its close-knit government community structure that speeds innovation and supports sustainability initiatives like data center heat recovery.

In addition, the Nordic region produces a surplus of renewable energy, made available to global data center operators as a green alternative to carbon-producing sources. Its use of renewable energy is embedded in the region’s history where hydropower, wind energy production, and geothermal sources produce renewable energy for these countries, making it a safe and desirable location for global organizations to house their workloads.

The region also benefits from improved cost stability compared to countries that traditionally rely on fossil fuels, given the price of renewable energy won’t be as affected by economic upheaval, political disruption, or regulatory changes. It is becoming increasingly clear that the Nordics have the right infrastructure in place to continue to help businesses navigate the global energy crisis, drive clean energy transformation and power the next generation of data centers. 

How does atNorth help organizations future-proof their data infrastructure?

We are a fast-moving company in an incredibly dynamic market. We are committed to expanding our footprint and building future-proofed, sustainable data center operations across the Nordics in a way that supports our customers and the surrounding environment.

The Nordic region is developing into a European hub for the data center industry due to key factors such as abundant land and space, reliable power supply, international connectivity, and low energy prices. This has been driving a surge in organizations moving their HPC and AI workloads to the Nordics. While many other markets are constrained by site availability, facility and rack space, and increasing energy costs, the Nordics are a safe haven offering efficiency, performance, and an ideal climate to deploy in.

Clients such as BNP Paribas and Shearwater Geoservices have migrated portions of their IT workloads to atNorth data centers in Iceland. BNP Paribas reduced energy consumption by more than 50% and cut CO2 output by 85%, while Shearwater Geoservices achieved a 92% reduction in CO2 output and an 85% reduction in cost. These businesses have future-proofed themselves against increasing energy demands and the associated costs, not to mention adherence to ESG targets and ever-evolving technology requirements.

How is atNorth addressing the increased demand for HPC & AI workloads?

HPC is a fast growing market – it’s estimated that the high performance computing market could be worth $90 billion by 2032. We see huge, untapped potential in this space to provide better services for data-heavy organizations that rely on HPC for their most critical business operations. This is why we recently announced the acquisition of Gompute, as we continue to fast forward our business with a full-stack solution specifically tailored for HPC & AI workloads to make sure we are at the top of our game to service our customers. 

Similarly, we are currently building three new data center campuses: DEN01, a 30MW data center in Copenhagen, Denmark; FIN02, a 25MW data center in Helsinki, Finland; and FIN04, a megasite in Kouvola, Finland. The FIN04 site will have an initial capacity of 60MW but includes a pathway to several hundred megawatts of power when fully built.

All three sites are specifically designed for high performance workloads and are located on sizable plots to allow atNorth to scale effectively with client demand.

Our goal is to continue to strengthen our business and provide a pathway for continued innovation – from the solutions and services we offer to the expertise and skill we bring to our workforce.

What are the new innovations driving change at the data center development level?

Clean energy, cheaper power alternatives, low power usage effectiveness (PUE), and heat reuse advantages, to name a few – all of which the Nordic region has become synonymous with. Add to this better connectivity, lower latency, larger capacity availability across rack and cabinet space, and cooling capabilities such as direct liquid cooling (DLC), immersion, rear-door cooling, and geothermal techniques.

Another factor central to supporting power and cooling efficiencies is the deployment of heat recovery. For example, atNorth’s data center campus in Stockholm, Sweden is a first-of-its-kind data center with a primary cooling system designed for heat recovery. The SWE01 campus recovers the heat generated by the site’s data halls, capturing up to 85% of the output in some cases. Recycling this waste heat through nearby energy grids can provide heat and hot water for surrounding communities, reducing the carbon footprint and the bottom line for data centers and organizations alike while also giving back to the community in a sustainable manner.

Our SWE01 campus is a good example of a purpose-built facility, designed and architected from the ground up with the space and capacity to support the needs of organizations right now while also giving them the flexibility to scale up or down in line with business peaks and troughs. As organizations look to reduce operational costs, lower their overall TCO, and meet sustainability initiatives, the Nordics are proving a reliable home, providing the best possible infrastructure to support digitalization and drive an environment-first approach that sustains increasingly crucial sustainability initiatives.

How can we future proof the next generation of data center infrastructure with data growth, high performance compute, and the environment top of mind?

We are only at the cusp of what technology and digitalization are capable of. First and foremost, we need to align as an industry to create meaningful changes that will stand the test of time and innovation. The data center industry is clearly a fundamental part of our global society and integral to our lives today, from social media and streaming media to working processes and even much greater capabilities such as predictive analytics and driverless cars!

We need to pull together to create common goals for our industry. While there is certainly a lot of competition here, and we all want to be successful in our own right, there are several aspects of our businesses that should be collaborative so that we can help shape the overall outcomes and future of the industry.

Those of us that work in this industry know that we are lucky – we have the opportunity to shape the future of the digital world in a way that can support our planet for future generations to come. But with this, we also have a duty to make it the best it can be, to reduce the burden of it on the world, and to make it more efficient by collaborating and sharing the elements that will enable improvements to be made.

At atNorth, we really believe that we can have more compute and a better world if we work together.

Erling Guðmundsson

COO, atNorth

Erling joined atNorth’s executive team in 2023, bringing 25 years of experience in the telecom and IT industries across Europe. An award-winning and inspirational leader, he is renowned for maintaining high professional standards and progressing high-performance teams to deliver exceptional customer service. Before joining atNorth, he served as the CEO of Reykjavik Fibre/Ljósleiðarinn, where his leadership left a lasting impact. His vast experience and unique skill set promote operational excellence and drive the continued growth and success of the company.

Top Trends in Cybersecurity, Ransomware and AI in 2024
https://ai-techpark.com/top-trends-in-cybersecurity-ransomware-and-ai-in-2024/ (Wed, 14 Feb 2024)

Explore key trends, prevention strategies, and the evolving landscape of AI-driven cybersecurity.

According to research from VMware Carbon Black, ransomware attacks surged by 148% during the onset of the Covid-19 pandemic, largely due to the rise in remote work. Key trends influencing the continuing upsurge in ransomware attacks include:

  • Exploitation of IT outsourcing services: Cybercriminals are targeting managed service providers (MSPs), compromising multiple clients through a single breach.
  • Vulnerable industries under attack: Healthcare, municipalities, and educational facilities are increasingly targeted due to pandemic-related vulnerabilities.
  • Evolving ransomware strains and defenses: Detection methods are adapting to new ransomware behaviors, employing improved heuristics and canary files, decoy files deliberately placed in a system to entice hackers or unauthorized users and act as digital alarms (a minimal monitoring sketch follows this list).
  • Rise of ransomware-as-a-service (RaaS): This model enables widespread attacks, complicating efforts to counteract them. According to an independent survey by Sophos, average ransomware payouts have escalated from $812,380 in 2022 to $1,542,333 in 2023.
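
As a toy illustration of the canary-file idea above, the sketch below polls a decoy file and raises an alert when it is touched. The decoy path is made up, and stat polling is only a stand-in: real deployments hook OS-level file-access auditing (for example auditd or ETW) or EDR tooling, and access-time checks can be defeated by relatime/noatime mount options.

```python
import os
import time

CANARY_PATH = "/srv/finance/passwords_backup.xlsx"  # hypothetical decoy file

def watch_canary(path: str, poll_seconds: int = 5) -> None:
    """Poll a decoy file and alert when anything reads or modifies it."""
    baseline = os.stat(path)
    while True:
        time.sleep(poll_seconds)
        current = os.stat(path)
        if current.st_mtime != baseline.st_mtime or current.st_size != baseline.st_size:
            print(f"ALERT: canary file modified: {path}")
        elif current.st_atime != baseline.st_atime:
            print(f"ALERT: canary file accessed: {path}")
        baseline = current

# watch_canary(CANARY_PATH)  # run as a background service and feed the alert into your SIEM
```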

Preventing Ransomware Attacks 

To effectively tackle the rising threat of ransomware, organizations are increasingly turning to comprehensive strategies that encompass various facets of cybersecurity. One key strategy is employee education, fostering a culture of heightened awareness regarding potential cyber threats. This involves recognizing phishing scams and educating staff to discern and dismiss suspicious links or emails, mitigating the risk of unwittingly providing access to malicious entities.

In tandem with employee education, bolstering the organization’s defenses against ransomware requires the implementation of robust technological measures. Advanced malware detection and filtering systems play a crucial role in fortifying both email and endpoint protection. By deploying these cutting-edge solutions, companies can significantly reduce the chances of malware infiltration. Additionally, the importance of fortified password protocols cannot be overstated in the battle against ransomware. Two-factor authentication and single sign-on systems provide formidable barriers, strengthening password security and rendering unauthorized access substantially more challenging for cybercriminals.
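
To make the two-factor point concrete, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the pyotp library. The account name, issuer, and stripped-down enrollment flow are placeholder assumptions, not a production design.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret and share it via a provisioning URI / QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the user submits the 6-digit code from their authenticator app.
submitted_code = totp.now()          # stand-in for real user input
if totp.verify(submitted_code):      # valid only within the current time window
    print("second factor accepted")
else:
    print("second factor rejected")
```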

An often overlooked yet critical component of ransomware mitigation involves the establishment of immutable, offsite backups. Employed in tandem with regularly practiced restoration procedures, these backups safeguard against data loss in the event of a ransomware attack. Additionally, coupling these backup strategies with robust data loss prevention software serves as a formidable defense, limiting the impact of potential data exfiltration attempts. By integrating these multifaceted strategies, organizations can construct a more resilient defense against ransomware threats, emphasizing proactive measures to mitigate risks rather than merely reacting to potential attacks.
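
One way to get the immutable, offsite property is object storage with write-once-read-many (WORM) retention, for example Amazon S3 Object Lock. The sketch below assumes a bucket that was created with Object Lock enabled; the bucket, key, and file paths are placeholders.

```python
import datetime

import boto3  # pip install boto3

s3 = boto3.client("s3")

def upload_immutable_backup(bucket: str, key: str, local_path: str, retain_days: int = 30) -> None:
    """Upload a backup object that cannot be overwritten or deleted until retention expires."""
    retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=retain_days)
    with open(local_path, "rb") as body:
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=body,
            ObjectLockMode="COMPLIANCE",             # WORM: retention cannot be shortened, even by admins
            ObjectLockRetainUntilDate=retain_until,
        )

# upload_immutable_backup("offsite-backups-example", "db/2024-02-14.dump", "/backups/db-2024-02-14.dump")
```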

Preparing for Data Center Security Threats in 2024

Data centers, rich in critical data, are prime targets for cybercriminals. Despite robust security measures, vulnerabilities persist. IT professionals are urged to bolster defenses in data center operations, potentially incorporating hardware-based solutions alongside software defenses.


To prepare for the top security threats of 2024, IT leaders must prioritize fortifying data centers against increasing ransomware and cyberattacks. Recognizing data centers as repositories of valuable personal, financial and intellectual data, it’s essential to enhance existing security measures. The focus is shifting toward integrating robust, hardware-based approaches alongside software defenses to reinforce digital barriers against cybercriminals.

This strategic shift is underscored by the emergence of hardware-based root of trust (RoT) systems that heavily rely on artificial intelligence (AI) technologies. As cyber threats evolve, AI algorithms have become crucial in processing and assimilating vast amounts of threat intelligence into actionable data. These systems, operated through trusted control/compute units (TCUs), offer advanced management controls at the core hardware level, enhancing the Zero Trust practices beyond current capabilities.

The increasing focus on cybersecurity from government and industry leaders further emphasizes the need for more secure networks. The next generation of hardware-anchored, AI-driven security platforms promises to establish a more robust Zero Trust architecture. This approach not only strengthens key storage and management but also assures a more secure future for data centers, thereby safeguarding digital communications across various sectors. The integration of effective AI-enabled Zero Trust practices provides a critical step in addressing the complex cybersecurity challenges that lie ahead.

The Future of Cybersecurity is AI-Driven Platforms

AI-driven security platforms are shaping the future of data network security. These platforms facilitate advanced key storage and management, ensuring more robust Zero Trust architectures. 

A new generation of hardware-based root of trust, employing AI technology, is critical to the future of cybersecurity. AI algorithms can effectively process threat intelligence data, enhancing Zero Trust practices at a fundamental hardware level. This approach addresses the dynamic nature of the threat landscape. 

AI in Cybersecurity is a Double-Edged Sword

AI plays a significant role in cybersecurity, and its formidable power can be wielded both defensively and for nefarious purposes. The application of AI in threat detection, relying on machine learning (ML), stands as a cornerstone in identifying and preempting potential risks. However, the accessibility of AI-powered hacking tools has empowered less sophisticated cybercriminals, enabling them to orchestrate advanced attacks with increasing ease. The growing accessibility of AI tools and applications underscores the challenge of securing intelligent systems against potential exploitation, emphasizing the paramount need to fortify their defenses to prevent misuse and manipulation by malicious actors.

Future Perspectives on AI and Cybersecurity

Now and in the future, AI technology can be used to alleviate the cybersecurity workforce shortage by automating threat detection. It also has potential in training cybersecurity professionals and enhancing skill development in areas like code reverse-engineering.

As the cybersecurity landscape evolves, organizations must adapt their strategies to combat emerging threats. Emphasizing employee training, robust technology defenses, and the innovative use of AI are crucial steps. Simultaneously, the industry must remain vigilant against the misuse of AI, ensuring that cybersecurity defenses stay ahead of ever-evolving threats.

AITech Interview with Charles Fan, Founder and CEO at MemVerge
https://ai-techpark.com/aitech-interview-with-charles-fan/ (Wed, 22 Feb 2023)

Modern applications are generating ever-larger amounts of data that need to be processed at ever-faster speeds. How is Charles driving this change in the data space?

1. Charles, please tell us about yourself and the legacy that you’ve created.

I’ve been an entrepreneur and innovator in data center infrastructure for more than 20 years. At the first company I founded, Rainfinity, we created a file virtualization product called RainStorage. Rainfinity was subsequently acquired by EMC. While I was at EMC, I founded the EMC R&D Center in China. Later, at VMware, I founded the VMware storage business unit and led the creation of the Virtual SAN, or vSAN, solution, a leading software-defined storage product that grew into a billion-dollar product. The most recent company that I founded, MemVerge, is revolutionizing data centers by unleashing the true potential of memory, or what we call Big Memory Computing.

2. Give us a brief overview of MemVerge and how it came into existence.

Modern applications are generating bigger amounts of data that need to be processed at a faster speed. The existing storage architecture can no longer meet this need. We predicted that there will be new memory hardware innovations both in media and in interconnect that will enable a new memory-centric infrastructure. We started MemVerge to be on the forefront of this trend and create the software necessary to enable this new architecture.

MemVerge delivered the industry’s first commercial memory virtualization software, Memory Machine, which provides a number of industry breakthroughs such as memory tiering, pooling, and snapshot-based in-memory data management. This makes a number of things possible, including replication, roll-back, autosave, thin clones, and instant recovery.

3. What are the core values upon which the organization is built? What are its vision and mission?

Our vision at MemVerge is that all applications will one day live in big memory. Our mission, the way we get there, is to open the door to Big Memory Computing through our Memory Machine software. 

When we started the company, I wrote a memo about our “OPEN culture” consisting of the following traits: Original, Positive, Externally-Driven, Now, and of course Open. Let me elaborate. The memo observes that all of us at MemVerge come from different backgrounds but are attracted by a startup culture. 

First, our startup culture is Original. We are here to invent, not to copy, not to follow the status quo. We aim to be original, to deliver to our customers unique values not available from others before.

Second, we are Positive. We strive to change the world and have fun doing it. With a supportive working environment, our goal is that every MemVerge employee wakes up every workday morning in a good mood and can’t wait to go to work.

The third trait is Externally-Driven. This means customers guide us. We will create innovations useful for them. Despite the presence of internal demands, we prioritize external demands from customers and partners, and we encourage everyone at MemVerge to be an advocate for our customers.

The fourth trait, Now, is built on the recognition that larger companies have infinitely more resources than we do, but we have speed on our side. We have a sense of urgency in everything we do, so whenever possible, we shorten execution loops, reduce dependencies, and optimize for speed. 

Perhaps most important is the umbrella trait of a truly OPEN MemVerge culture. We strive to ensure open communication, so that everyone is free to speak their mind. In technical discussions everyone is equal and may the best idea win. All perspectives are welcome and will be considered in the decision-making process. With this OPEN culture, we will be on the same MemVerge team, working together towards the same goal.

4. Tell us more about MemVerge’s distinguished product suite. What makes MemVerge different from its competitors?

MemVerge Memory Machine Cloud Edition and MemVerge Memory Viewer help our customers solve their immediate memory challenges. As the industry’s new Compute Express Link (CXL) interface gets ready for take-off, Memory Machine and Memory Viewer are also the first memory auto-tiering software suite to support CXL. Working with our hardware partners, we have taken the first step towards CXL pooled memory.

Memory Machine Cloud Edition uses patented ZeroIO memory snapshot technology and cloud service orchestration to transparently checkpoint long-running applications and allow customers to safely use low-cost Spot Instances. Organizations can reduce cloud cost by up to 70%. Over time, Memory Machine Cloud Edition will form the basis of an infrastructure cloud service enabling applications to run across a multi-cloud environment.

Memory Viewer software provides system administrators with actionable information about DRAM, their most expensive and under-utilized asset. The average utilization of DRAM in hyperscaler data centers is approximately 40 percent, and the cost of memory is half of the cost of a server. As the world enters the CXL era of peta-scale pooled memory, better visibility into the health, capacity, and performance of memory infrastructure will become indispensable. Memory Viewer topology maps and heat maps give system administrators new insights into their memory infrastructure. We provide Memory Viewer free for download.

5. What is the most exciting part and the most intimidating part about being an entrepreneur? What are the major challenges that you face while bringing your vision to reality?

Being an entrepreneur gives me the opportunity to follow my passion, and let my creativity fly.  And if I am lucky, that creativity will enable technology that makes people’s lives better. Seeing our labor bringing smiles to our customers is one of the best feelings in the world.

Let me share one example. At MemVerge, our in-memory snapshot technology can accelerate genomics workloads, and help our customers find cures to diseases faster. What used to take a couple of weeks to run, will now finish in a couple of days. That is exciting!

At the same time, starting your own venture is very challenging, and can be a humbling experience. When I started my first startup, Rainfinity, we ran into a wall when the internet bubble burst in 2001. Our customers evaporated, and our partners disappeared. Our investors were all hit hard, so the company was in an existential crisis. Two of our best engineers and I started working on a new product concept: storage virtualization, which is completely different from our previous product. The three of us started talking to a lot of customers, and raced with time to create this product to solve their pain point. In nine months, we got these customers onboard right before we ran out of money, and the company survived. 

6. What qualities do you think are essential to make a good leader and entrepreneur? How do you approach your role as a leader?

It may sound obvious, but the ability to think is a critical quality in a leader. People in a technology field can go a long way if they can have an opinion, a perspective, and a vision of things beyond the average person. I enjoy thinking about things and having an opinion on where the world of technology is going.

The second quality is the ability to connect with other people. You must have a natural tendency to connect with your employees, your colleagues, your customers, your partners, and your investors. As an introverted person, I’ve had to work somewhat on this.

Finally, a strong technical background has helped me a great deal. I have my fair share of weaknesses, but having a strong understanding of technology has been a good trait in my industry. At MemVerge, we have built a product team of people with a very deep understanding of the technology in our space. Their depth of education and technical knowledge has allowed them to tackle tough challenges. This deeper technical background has allowed MemVerge to blaze trails beyond “me-too” products and take us where no one has gone before.

7. What advice would you give to budding entrepreneurs and enthusiasts aspiring to venture into the industry?

Make sure you have a deep understanding of the market you are entering. When I founded Rainfinity, my role was primarily that of a technical founder, and I did not have a strong concept of the market size for our solution. Our first product was a cool technology, and it was successful, but its market potential was limited, and so our success was limited as a result. It’s easy to focus on how cool your technology is, but the size of the market is really important, especially for technical startups.

8. How do you plan to scale the growth of MemVerge in the coming years? Can you give us a sneak peek into the new projects that are currently in the pipeline of your company?

We are ramping up with software solutions that address the emerging Compute Express Link (CXL) interface, which will be a true game changer for computing as we know it. At the same time, our Memory Machine Cloud Edition is finding new ways to help customers reduce their skyrocketing cloud costs.

9. Being a leader, what tech trends do you think will shape the future of the industry?

CXL is a huge one. It’s an open standard for processors, memory expansion and accelerators and it will be a true game changer for data center architecture and enable truly disaggregated and composable infrastructure. This new technology has been embraced by industry leaders including Intel, AMD, ARM, Samsung, SK Hynix and Micron.

This new memory landscape opens a completely new world of opportunities for MemVerge and our software to address. Just as Fibre Channel enabled storage area networks for the first time, CXL has the potential to enable memory fabric, and our strategy is to develop MemVerge software solutions to expand, tier, pool and share memory over this fabric.

10. How do you spend your downtime? What hobbies do you enjoy pursuing?

I enjoy playing badminton and table tennis and watching the 49ers and Warriors!

Dr. Charles Fan

CEO and Co-founder, MemVerge

Charles Fan is co-founder and CEO of MemVerge. Prior to MemVerge, Charles was the CTO of Cheetah Mobile leading its global technology teams, and an SVP/GM at VMware, founding the storage business unit that developed the Virtual SAN product. Charles also worked at EMC and was the founder of the EMC China R&D Center. Charles joined EMC via the acquisition of Rainfinity, where he was a co-founder and CTO. Charles received his Ph.D. and M.S. in Electrical Engineering from the California Institute of Technology, and his B.E. in Electrical Engineering from the Cooper Union.
