Generative AI and the wizardry of the wide-open ecosystem

    IT leaders face several challenges navigating the growing generative AI ecosystem, but choosing a tech stack that can bring business use cases to fruition is among the biggest.

    The number of proprietary and open-source models is growing daily, as are the tools designed to support them.

    To understand the challenge, picture IT leaders as wizards sifting through a vast library of magical spells (Dumbledore may suffice for many), each spell representing a different model, tool or technology. Each shelf holds a different category of spells, such as text generation or image and video synthesis.

    Spell books include diagrams, incantations and instructions, just as GenAI models come with documentation, parameters and operational nuances. GPT-4, Stable Diffusion and Llama 2 rank among the best-known models, though many more are gaining traction.

    Moreover, this “library” is constantly growing, making it harder for IT leaders to keep up with the frenetic pace of conjuring – er – innovation. Kind of like chasing after a moving staircase.

    You get the idea. If you’re unsure, brush up on the Harry Potter books or films. In the meantime, here are three key steps to consider as you begin architecting your AI infrastructure for the future.

    Pick models and modular architecture

    As IT leaders adopted more public cloud software, they realized that a witches’ brew of licensing terms, proprietary wrappers and data gravity made some of their applications tricky and expensive to move. These organizations had effectively become locked into the cloud platforms, whose moats were designed to keep apps inside the castle walls.

    If you believe that GenAI is going to be a critical workload for your business – 70 percent of global CEOs told PwC it will change the way their businesses create, deliver and capture value – then you must clear the lock-in hurdle. One way to do this is to pick an open model and supporting stack that afford you the flexibility to jump to new products that better serve your business.

    Tech analyst Tim Andrews advocates reframing your mindset: rather than trying to predict product “winners,” design so you can exit as easily as possible. A modular software architecture, in which portions of your systems are isolated from one another, can help, as the sketch below illustrates.
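    To make the easy-exit idea concrete, here is a minimal sketch, assuming a Python codebase; every class and function name below is illustrative rather than a real library. The application depends only on a small interface, and each provider hides behind its own adapter, so swapping models becomes a one-line change at composition time.

```python
# Minimal sketch of a modular, easy-exit architecture (all names illustrative).
from typing import Protocol


class TextModel(Protocol):
    """The small contract the rest of the application codes against."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str: ...


class HostedModel:
    """Adapter for a vendor API; only this class knows the vendor's SDK."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # The vendor-specific call would go here.
        return "<hosted model output>"


class SelfHostedModel:
    """Adapter for an open model served on your own infrastructure."""

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # The call to a local inference server would go here.
        return "<self-hosted model output>"


def summarize(model: TextModel, document: str) -> str:
    # Business logic depends only on the interface, so exiting one
    # provider for another never touches this code.
    return model.generate(f"Summarize the following:\n{document}", max_tokens=200)
```

    Isolating each vendor-specific call in its own adapter is what keeps the moat from forming: the day a better product appears, only the adapter changes.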

    Fortunately, many models will afford you flexibility and freedom. But tread carefully; just as spell books may harbor hidden curses, some models may drain the organization’s resources or introduce biases or hallucinations. Research the models and understand the trade-offs.

    Choose infrastructure carefully

    GPU powerhouse NVIDIA believes that most large corporations will stand up their own AI factories – essentially datacenters dedicated to running only AI workloads that aim to boost productivity and customer experience. This will remain aspirational for all but the companies with the robust cash flow to build such centers.

    Public cloud models will help you get up and running quickly, but if right-sizing your AI model and ensuring data privacy and security are key, an on-premises path may be right for you. Your infrastructure is the magic wand that enables you to run your models – and what your wand is made of matters, too.

    In the near term, organizations will continue to run their AI workloads in a hybrid or multicloud environment that offers flexibility of choice while allowing IT leaders to pick operating locations based on performance, latency, security and other factors. The future IT architecture is multicloud-by-design, leveraging infrastructure and reference designs delivered as-a-Service. Building for that vision will enable you to run your GenAI workloads in a variety of places.
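    As a toy illustration of what picking operating locations by factor might look like in practice – the location names, sensitivity labels and latency thresholds here are assumptions, not a real product API – consider a simple placement rule:

```python
# Illustrative multicloud-by-design placement rule (thresholds are assumptions).
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    data_sensitivity: str    # "public", "internal" or "regulated"
    latency_budget_ms: int


def place(workload: Workload) -> str:
    """Pick an operating location for a GenAI workload."""
    if workload.data_sensitivity == "regulated":
        return "on-prem"        # keep sensitive data inside the castle walls
    if workload.latency_budget_ms < 50:
        return "edge"           # tight latency budgets favor edge locations
    return "public-cloud"       # everything else can burst to the cloud


print(place(Workload("claims-summarizer", "regulated", 200)))  # on-prem
print(place(Workload("chat-assistant", "internal", 30)))       # edge
```

    Encoding placement decisions as explicit, reviewable rules – rather than ad hoc choices per project – is one way to keep a hybrid estate coherent as workloads multiply.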

    Know this: with organizations still evaluating or piloting GenAI models, standardization paths have yet to emerge. As you build, take care to head off technical debt as much as possible.

    Embrace the wide-open ecosystem

    While wizards may spend years mastering their spells, IT leaders don’t have that luxury. Eighty-five percent of C-suite executives said they expect to raise their level of AI and GenAI investments in 2024, according to Boston Consulting Group.

    It’s incumbent on IT leaders to help business stakeholders figure out how to create value from their GenAI deployments, even as new models and iterations regularly arrive.

    Fortunately, there is an open ecosystem of partners to help mitigate the challenges. Open ecosystems are critical because they lower the barrier to entry, even for less mature technology teams.

    In an open ecosystem, organizations that lack the technical chops or financial means to build or pay for LLMs can access out-of-the-box models without the precious skills needed to train, tune or augment them. Trusted partners are one of the keys to navigating that ecosystem.
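    For a sense of how low that barrier can be, here is a minimal example using Hugging Face’s transformers library; the small gpt2 checkpoint is used only so the example runs without gated-model access – in practice you would swap in whichever open model fits your use case.

```python
# Out-of-the-box text generation with Hugging Face transformers.
# pip install transformers torch
from transformers import pipeline

# "gpt2" is a small, ungated stand-in; substitute an open model of your choice.
generator = pipeline("text-generation", model="gpt2")

result = generator("Generative AI will change enterprise IT by", max_new_tokens=40)
print(result[0]["generated_text"])
```

    Three lines of code stand between a team and a working model – the harder work, as the rest of this piece argues, is choosing the stack around it.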

    Dell is working with partners such as Meta, Hugging Face and others to help you bring AI to your data with high-performing servers, storage, client devices and professional services you can trust.

    Keeping your options open is critical for delivering the business outcomes that will make your GenAI journey magical.

    Learn more about Dell Generative AI Solutions.

    Brought to you by Dell Technologies.
