Technology News: October 2024
As AI reshapes the digital landscape, tech companies find themselves in a high-stakes game of regulatory chess, with each move potentially reshaping the space left for innovation. The board is especially intricate for global infrastructure providers like Cloudflare, which must navigate cybersecurity, data privacy, and content moderation within a complex policy framework.
“No one wants to miss the boat,” says Alissa Starzak, the company’s deputy chief legal officer and global head of public policy, referring to the rush to regulate AI. Yet she also points to the tension between urgent action and measured response, a tension that encapsulates the balancing act Cloudflare navigates daily.
In a recent interview with Artificial Intelligence News, Starzak revealed how the internet infrastructure giant is working to shape a regulatory framework that fosters innovation while safeguarding against emerging cyber threats.
The AI regulatory conundrum: Speed vs. caution
Regulators worldwide face the question of how to regulate AI as the technology advances. Their urgency is tempered by a significant fact: the full dimensions of AI’s potential are not yet understood. “No one really knows yet,” Starzak said, highlighting the challenge of crafting regulations for a technology whose scope remains unknown.
That lack of knowledge has meant that frameworks for responsible AI development and deployment remain somewhat speculative. One example is the AI risk management framework from the National Institute of Standards and Technology (NIST), which Starzak described as a meaningful step towards the goal. Voluntary guidelines of this kind give companies a roadmap for creating AI risk assessment measures and encourage them to do so without stifling innovation.
The tightrope of global regulatory harmonisation
Cloudflare is cognisant of the complexities of achieving regulatory harmony across different jurisdictions, particularly in data protection and privacy. Starzak used the EU’s General Data Protection Regulation (GDPR) to illustrate both the benefits and the challenges of sweeping regulatory frameworks.
Starzak acknowledged that GDPR has played a significant role in consolidating privacy norms internationally, but said its real-life application does not always align with how the internet functions. “It doesn’t actually feel like the way the internet necessarily works in practice,” she said, referring to restrictions on data transfers between jurisdictions.
This disconnect highlights a broader challenge: crafting regulations that protect consumers and national interests without impeding the global nature of the internet and digital commerce. Starzak emphasised the need for regulatory mechanisms that are “consistent across jurisdiction to jurisdiction, but enable information to travel.”
The imperative of targeted, narrow actions
Starzak advocates for a more nuanced, targeted approach to cybersecurity measures and content moderation. Her philosophy is rooted in recognising that broad, sweeping actions often have unintended consequences that can harm the ecosystem they aim to protect.
In terms of cybersecurity, Starzak stressed the importance of proportionality. She drew a stark contrast between targeted actions, like removing a specific piece of content, and drastic measures, like complete internet shutdowns. “The narrower that you can go, the better off you’re going to be from an open internet standpoint,” she said.
The principle extends to content moderation as well. As Starzak describes it, Cloudflare’s approach involves carefully distinguishing between different types of services and their impacts. By doing so, the company aims to make more precise, effective decisions that address specific issues without unnecessarily compromising the broader internet ecosystem.
Balancing innovation and regulation in AI
The rapid advancement of AI technology presents a unique regulatory challenge. Starzak highlighted the risk of over-regulation stifling innovation and concentrating power in the hands of a few large players. “If you regulate it too much, you restrict the industry in a very significant way and make it really only available to a very small number of players,” she said.
Starzak advocates a regulatory approach that encourages responsible innovation while addressing potential harms. This includes promoting the development and adoption of AI risk assessment frameworks and encouraging industry self-regulation through model testing and ‘red teaming.’
The path forward: collaboration and flexibility
Starzak emphasises the need for ongoing dialogue and flexibility in regulatory approaches to AI and cybersecurity. She highlighted the importance of industry, government, and civil society collaboration to develop effective, balanced regulations.
According to Starzak, the key is to focus on specific harms and consumer protection rather than broad, sweeping regulations. “You have to go in with a purpose,” she stated, urging regulators to understand and articulate the problems they’re trying to solve.
A targeted approach, combined with a willingness to adapt as technologies evolve, offers a path through the complex world of internet and AI regulation. As Cloudflare continues to navigate this landscape, Starzak’s insights provide a roadmap for balancing innovation, security, and responsible governance.
As the tech industry and regulators grapple with the challenge of creating effective governance frameworks, Cloudflare’s approach emphasises targeted actions, global harmonisation efforts, and regulatory flexibility. It represents a thoughtful perspective in the dialogue between tech companies and policymakers.
The way forward likely involves collaborative efforts from various stakeholders, including industry leaders, government bodies, and civil society organisations. The focus remains on striking a balance between protecting users and fostering innovation. This goal requires ongoing adaptation and cooperation across the tech ecosystem.
See also: Balancing innovation and trust: Experts assess the EU’s AI Act
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Regulations to help or hinder: Cloudflare’s take appeared first on AI News.
Happy Eco News: Heat Storing Firebricks Make Clean Energy Cheaper
Heat-storing firebricks could reduce clean energy transition costs by over a trillion dollars.
A new study published in PNAS Nexus suggests that using special heat storing firebricks could significantly reduce the costs and challenges of switching to 100% clean energy worldwide. The research, led by Mark Z. Jacobson of Stanford University, looked at how these bricks could impact 149 countries.
These special bricks, called heat storing firebricks, can withstand extreme heat. They can be heated for energy storage using extra electricity from renewable sources like wind and solar. Later, they can release that stored heat for use in factories and industrial processes. This approach could help solve one of the biggest problems with renewable energy: its variability.
The study compared two scenarios for transitioning to 100% clean energy by 2050: one without these heat storing firebricks and one that uses them. Using computer simulations, the researchers found that using firebricks could lower overall energy costs while making the power grid more reliable.
The researchers discovered several benefits of using heat storing firebricks. They found that fewer batteries would be needed to store electricity, less hydrogen would need to be produced for backup power, and less underground heat storage would be required. The amount of wind turbines and solar panels needed also decreased slightly. Overall, using heat storing firebricks reduced the cost of switching to clean energy by $1.27 trillion and lowered yearly energy costs by $119 billion across all countries studied. The cost of energy per unit also dropped by about 2%.
The study found that the heat storing firebricks were used efficiently in most regions, being charged and discharged regularly. In contrast, batteries were mainly used for short bursts of power rather than long-term energy storage.
Beyond saving money, using heat storing firebricks also reduced the amount of land needed for clean energy equipment by 2,700 square kilometers. However, it did result in about 118,000 fewer jobs compared to the scenario without heat storing firebricks, due to less need for other types of energy storage and power generation.
The researchers noted that these heat storing firebrick systems are already available for purchase and could potentially be used for up to 90% of industrial heating needs. They can store heat at temperatures up to 1,800°C (3,272°F), making them suitable for a wide range of industrial uses.
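For a rough sense of scale, the sketch below estimates the sensible heat a given mass of brick could hold using the basic formula Q = m·c·ΔT. The specific heat value and the discharge cut-off temperature are assumptions for illustration only and are not taken from the study; only the 1,800°C upper limit comes from the article.

```python
# Back-of-the-envelope sensible-heat estimate: Q = m * c * dT.
# Assumed values (not from the study): specific heat ~1.0 kJ/(kg*K) for dense
# ceramic brick, and a 200 C minimum useful discharge temperature.
SPECIFIC_HEAT_KJ_PER_KG_K = 1.0
T_CHARGE_C = 1800.0      # maximum storage temperature cited in the article
T_DISCHARGE_C = 200.0    # assumed process-heat cut-off

def stored_energy_kwh(mass_kg: float) -> float:
    """Energy a brick mass could hold between charge and discharge temperatures."""
    q_kj = mass_kg * SPECIFIC_HEAT_KJ_PER_KG_K * (T_CHARGE_C - T_DISCHARGE_C)
    return q_kj / 3600.0  # 1 kWh = 3600 kJ

print(f"{stored_energy_kwh(1000):.0f} kWh per tonne of brick")  # roughly 444 kWh
```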
While the results are promising, the researchers acknowledge some uncertainties. They tested what would happen if the bricks lost more heat over time and found that even with higher heat loss, the system still saved money compared to not using firebricks.
The study suggests that using these heat storing firebricks could be a valuable tool in making the switch to 100% clean energy easier and cheaper across all sectors. However, the authors note that incentives and new policies may be needed to encourage the widespread use of firebricks in factories, as existing heating systems are deeply entrenched.
This research adds to a growing body of work examining how to achieve 100% renewable energy. The authors argue that a quick transition – with 80% of all energy from clean sources by 2030 and 100% by 2035-2050 – may be necessary to avoid severe climate change and prevent millions of deaths from air pollution each year.
The researchers also compared firebricks to other ways of storing heat energy, such as using molten salt or materials that change from solid to liquid when heated. They found that firebricks have some advantages, like lower cost and the ability to reach higher temperatures. However, other methods might work better in certain situations.
Scientists are also working on improving firebricks and developing new materials for storing heat. Some researchers are exploring ways to make firebricks that can store even more heat or lose less heat over time. Others are looking at completely new materials, like special ceramics or metal alloys, that might store heat even more efficiently than current firebricks.
As countries and industries look for ways to reduce their carbon emissions, technologies like firebrick storage could play a crucial role in balancing the supply of renewable energy with the demand for industrial heat. The study’s findings suggest that more research and development in this area could bring significant benefits for both fighting climate change and making clean energy transitions more affordable.
The post Heat Storing Firebricks Make Clean Energy Cheaper appeared first on Happy Eco News.
September 2024
It’s sometimes difficult to distinguish the reality of technology from the hype and marketing messages that bombard our inboxes daily. In just the last five years, we’ve probably heard too much about the metaverse, blockchain and virtual reality, for example. At present, we’re in the midst of a furore about the much-abused term ‘AI’, and time will tell whether this particular storm turns out to be one in a teacup.
Artificial Intelligence News spoke exclusively to Jon McLoone, Director of Technical Communication and Strategy at one of the most mature organisations in the computational intelligence and scientific innovation space, Wolfram Research, to help put our present concepts of AI and their practical uses into a deeper context.
Jon has worked at Wolfram Research for 32 years in various roles, currently leading the European Technical Services team. A mathematician by training and a skilled practitioner in many aspects of data analysis, we began our interview by having him describe Wolfram’s work in an elevator pitch format.
Jon McLoone: “Our value proposition is that we know computation and Wolfram technology. We tailor our technology to the problem that an organisation has. That’s across a broad range of things. So, we don’t have a typical customer. What they have in common is they’re doing something innovative.”
“We’re doing problem-solving, the type of things that use computation and data science. We’re building out a unified platform for computation, and when we talk about computation, we mean the kinds of technical computing, like engineering calculations, data science and machine learning. It’s things like social network analysis, biosciences, actuarial science, and financial computations. Abstractly, these are all fundamentally mathematical things.”
“Our world is all those structured areas where we’ve spent 30 years building out different ontologies. We have a symbolic representation of the maths, but also things like graphs and networks, documents, videos, images, audio, time series, entities in the real world, like cities, rivers, and mountains. My team is doing the fun stuff of actually making it do something useful!”
“AI we just see as another kind of computation. There were different algorithms that have been developed over years, some of them hundreds of years ago, some of them only tens of years ago. Gen AI just adds to this list.”
Claims made about AI in 2024 can sometimes be overoptimistic, so we need to be realistic about its capabilities and consider what it excels at and where it falls short.
“There’s still human intelligence, which still remains as the strategic element. You’re not going to say, in the next five years AI will run my company and make decisions. Generative AI is very fluent but is unreliable. Its job is to be plausible, not to be correct. And particularly when you get into the kinds of things Wolfram does, it’s terrible because it will tell you the kinds of things that your mathematical answer would look like.” (Artificial Intelligence News‘ italics.)
The work of Wolfram Research in this context focuses on what Jon terms ‘symbolic AI’. To differentiate generative and symbolic AI, he gave us the analogy of modelling the trajectory of a thrown ball. A generative AI would learn how the ball travels by examining many thousands of such throws and then be able to produce a description of the trajectory. “That description would be plausible. That kind of model is data-rich, understanding poor.”
A symbolic representation of the thrown ball, on the other hand, would involve differential equations for projectile motion and representations of elements: mass, viscosity of the atmosphere, friction, and many other factors. “It could then be asked, ‘What happens if I throw the ball on Mars?’ It’ll say something accurate. It’s not going to fail.”
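To make the contrast concrete, here is a minimal numerical sketch of what a first-principles model of the thrown ball can look like, reduced to gravity plus linear drag with illustrative parameter values. It is written in Python rather than Wolfram Language, and it is not Wolfram's model; the point is simply that changing the gravity argument answers the "what if I throw it on Mars?" question directly, with no new training data.

```python
# Minimal first-principles projectile model: gravity plus linear air drag,
# integrated with a simple Euler step. All parameter values are illustrative.
def trajectory(v0=(10.0, 10.0), mass=0.145, drag=0.02, g=9.81, dt=0.001):
    """Return (horizontal range, flight time) for a ball thrown from ground level."""
    x, y = 0.0, 0.0
    vx, vy = v0
    t = 0.0
    while y >= 0.0:
        ax = -(drag / mass) * vx        # drag decelerates horizontal motion
        ay = -g - (drag / mass) * vy    # gravity plus drag act vertically
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t

print("Earth:", trajectory(g=9.81))
print("Mars: ", trajectory(g=3.71, drag=0.0002))  # weaker gravity, thin atmosphere
```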
The ideal way to solve business (or scientific, medical, or engineering) problems is a combination of human intelligence, symbolic reasoning, as epitomised in Wolfram Language, and what we now term AI acting as the glue between them. AI is a great technology for interpreting meaning and acting as an interface between the component parts.
“Some of the interesting crossovers are where we take natural language and turn that into some structured information that you can then compute with. Human language is very messy and ambiguous, and generative AI is very good at mapping that to some structure. Once you’re in a structured world of something that is syntactically formal, then you can do things on it.”
A recent example of combining ‘traditional’ AI with the work of Wolfram involved medical records:
“We did a project recently taking medical reports, which were handwritten, typed and digital. But they contain words, and trying to do statistics on those isn’t possible. And so, you’ve got to use the generative AI part for mapping all of these words to things like classes: was this an avoidable death? Yes. No. That’s a nice, structured key value pair. And then once we’ve got that information in structured form (for example a piece of JSON or XML, or whatever your chosen structure), we can then do classical statistics to start saying, ‘Is there a trend? Can we project? Was there an impact from COVID on hospital harms?’ Clear-cut questions that you can approach symbolically with things like means and medians and models.”
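As a toy illustration of that second, classical-statistics step (not the actual project), the sketch below assumes the generative-AI stage has already mapped each report to a structured record; the field names and values are invented.

```python
import json
from statistics import mean

# Hypothetical output of the generative-AI extraction step: free-text reports
# mapped to structured key/value records (fields invented for illustration).
records_json = """
[
  {"year": 2019, "avoidable_death": false, "harm_score": 2},
  {"year": 2020, "avoidable_death": true,  "harm_score": 5},
  {"year": 2021, "avoidable_death": false, "harm_score": 3}
]
"""
records = json.loads(records_json)

# Once the data is structured, classical statistics become straightforward.
avoidable_rate = mean(1 if r["avoidable_death"] else 0 for r in records)
harm_by_period = {
    "pre-2020": mean(r["harm_score"] for r in records if r["year"] < 2020),
    "2020 onwards": mean(r["harm_score"] for r in records if r["year"] >= 2020),
}

print(f"Avoidable-death rate: {avoidable_rate:.0%}")
print(f"Mean harm score by period: {harm_by_period}")
```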
During our interview, Jon also gave a précis of a presentation that used an imaginary peanut butter cup manufacturing plant as an example of his organisation’s work. What might be the effects of swapping out a particular ingredient or altering some detail of the recipe, and what would that change do to the product’s shelf life?
“LLMs (large language models) will say, ‘Oh, they’ll probably last a few weeks because peanut butter cups usually sit on the shelf a few weeks. But going to a computational model that can plug into the ingredients, and compute, and you’ll know this thing should last for eight weeks before it goes off. Or what that change might do to the manufacturing process? A computational model can connect to the digital twin of your manufacturing plant and learn, ‘That will slow things down by 3%, so your productivity will fall by 20% because it creates a bottleneck here.’ LLMs are great at connecting you and your question to the model, maths, data science or the database. And that’s really an interesting three-way meeting of minds.”
You can catch Wolfram Research at the upcoming TechEx event in Amsterdam, October 1-2, at stand 166 of the AI & Big Data strand. We can’t guarantee any peanut butter-related discussion at the event, but to discover how powerful modelling and generative AI can be harnessed to solve your specific problems and quandaries, contact the company via its website.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post How cold hard data science harnesses AI with Wolfram Research appeared first on AI News.
Happy Eco News: How the Royal Mint is Creating Gold Coins from E-Waste
The Royal Mint, the maker of the United Kingdom’s coins, has recently signed an agreement with Excir (a Canadian cleantech start-up) to introduce a world-first technology in the UK. Excir will retrieve and recycle gold and other precious metals from electronic waste, using them to create commemorative coins.
The Royal Mint has opened a large industrial plant in Llantrisant, Wales, where metals will be removed from e-waste. This project aims to reduce the company’s reliance on traditional mining and encourage more sustainable practices.
The extraction process begins when the circuit boards from laptops, phones, and televisions are separated into their various components. The array of detached coils, capacitors, pins, and transistors is sieved, sorted, sliced, and diced as it moves along a conveyor belt. Anything that contains gold is set aside.
The gold pieces are tipped into a chemical solution at an on-site chemical plant, which leaches the gold out into the liquid. The liquid is filtered, and what is left is a powder of pure gold. Traditional gold recovery processes can be very energy-intensive and use harmful chemicals. This new method can be done at room temperature, at very low energy, and the gold can be extracted fairly quickly. Moreover, over 99% of the gold found in electronic waste is recovered using this method.
The Royal Mint says it has the capacity to process 4,000 tons of printed circuit boards a year. Initially, the recovered gold will be used to make jewellery, but it will eventually be used to make the commemorative coins. The Mint is also looking to create products with other metals, including aluminium, copper, tin, and steel. Furthermore, it is looking to see if the ground-up circuit boards could be used in the construction industry.
Recovering valuable materials from waste and obsolete products in urban environments, such as electronics, buildings, vehicles, and infrastructure, is a process called urban mining. It is becoming increasingly important as we work towards reducing the need for new raw materials. Urban mining lowers the environmental impact associated with traditional mining, including habitat destruction, water use, and greenhouse gas emissions.
Urban mining is a key component of the circular economy, where products are designed to be reused, recycled, or repurposed, minimizing waste and maximizing resource efficiency.
And as it turns out, there is no shortage of e-waste in the UK. Over 62 million tonnes of e-waste was produced worldwide in 2022, and this number is expected to rise to 82 million tonnes by 2030. An e-waste report by the United Nations places the UK as the second biggest producer of e-waste per capita, the first being Norway.
The UK was known for shipping its waste overseas, but now it is finding uses for these materials at home. Finding a new purpose for e-waste in the UK comes at a good time, as the Royal Mint has moved away from making circulating coins (due to the societal shift away from paying with cash). Reclaiming e-waste is an opportunity to create new jobs and foster new skills for people in the UK.
This new partnership between the Royal Mint and Excir is important in how we move towards recovering and repurposing items and reducing waste. These efforts will help the Royal Mint, which has been around for over 1000 years, secure a future as a leader in sustainably sourced precious metals and the circular economy of the UK. It is hoped that other countries will follow suit and realize that e-waste is quite valuable economically and environmentally and that they should take action now.
The post How the Royal Mint is Creating Gold Coins from E-Waste appeared first on Happy Eco News.
As data management grows more complex and modern applications push beyond the capabilities of traditional approaches, AI is revolutionising application scaling.
In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications.
With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods.
Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes.
AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology?
Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation.
Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated.
The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity.
AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI?
Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons:
- The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important.
- AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness.
- As AI models grow and data volumes expand, databases must scale horizontally, to allow organisations to add capacity without significant downtime or performance degradation.
- Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency.

AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?
Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data from a wide variety of sources that are required to build AI applications. Scaling these initiatives can also put strain on the existing IT infrastructure and once the models are built, they require continuous iteration and improvement.
To make this easier, a database that scales can help simplify the management, storage and retrieval of diverse datasets. It offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. Additionally, they accelerate time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation.
AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions?
Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions.
One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system.
MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity.
AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI?
Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face.
To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform.
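As a minimal sketch of what that unification can look like in practice, the snippet below stores operational data, metadata and an (abbreviated) embedding vector in a single MongoDB document using pymongo. The connection string, database, collection and field names are placeholders, not MongoDB recommendations.

```python
from pymongo import MongoClient

# Connection string, database and field names are placeholders for illustration.
client = MongoClient("mongodb://localhost:27017")
db = client["ai_app"]

# One flexible document can hold operational data, metadata and vector data together.
db.products.insert_one({
    "name": "Trail Running Shoe",
    "price": 129.99,
    "metadata": {"source": "catalogue-feed", "updated": "2024-09-01"},
    "reviews": [
        {"user": "a1", "rating": 5, "text": "Great grip on wet rock."},
        {"user": "b2", "rating": 4, "text": "Light but durable."},
    ],
    "embedding": [0.12, -0.03, 0.87],  # illustrative, truncated embedding vector
})

# Query with an ordinary filter; adding new fields (e.g. generated summaries)
# to future documents requires no schema migration.
doc = db.products.find_one({"price": {"$lt": 150}}, {"name": 1, "reviews.rating": 1})
print(doc)
```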
AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications?
Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications.
(Photo by Caspar Camille Rubin)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Han Heloir, MongoDB: The role of scalable databases in AI-powered apps appeared first on AI News.
Ahead of AI & Big Data Expo Europe, AI News caught up with Ivo Everts, Senior Solutions Architect at Databricks, to discuss several key developments set to shape the future of open-source AI and data governance.
One of Databricks’ notable achievements is the DBRX model, which set a new standard for open large language models (LLMs).
“Upon release, DBRX outperformed all other leading open models on standard benchmarks and has up to 2x faster inference than models like Llama2-70B,” Everts explains. “It was trained more efficiently due to a variety of technological advances.
“From a quality standpoint, we believe that DBRX is one of the best open-source models out there and when we refer to ‘best’ this means a wide range of industry benchmarks, including language understanding (MMLU), Programming (HumanEval), and Math (GSM8K).”
The open-source AI model aims to “democratise the training of custom LLMs beyond a small handful of model providers and show organisations that they can train world-class LLMs on their data in a cost-effective way.”
In line with their commitment to open ecosystems, Databricks has also open-sourced Unity Catalog.
“Open-sourcing Unity Catalog enhances its adoption across cloud platforms (e.g., AWS, Azure) and on-premise infrastructures,” Everts notes. “This flexibility allows organisations to uniformly apply data governance policies regardless of where the data is stored or processed.”
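As a small illustration of what a uniformly applied policy can look like, the snippet below issues a standard Unity Catalog GRANT from a notebook. It assumes a Databricks environment where a `spark` session already exists and Unity Catalog is enabled; the three-level table name and group name are placeholders.

```python
# Minimal sketch: expressing an access policy as a Unity Catalog GRANT statement.
# Assumes a Databricks notebook where `spark` is already defined and Unity Catalog
# is enabled; `analytics.sales.orders` and `data-readers` are placeholder names.
spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data-readers`")

# The same statement describes the policy regardless of which cloud or region the
# underlying storage lives in, which is what keeps governance uniform.
```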
Unity Catalog addresses the challenges of data sprawl and inconsistent access controls through various features:
- Centralised data access management: “Unity Catalog centralises the governance of data assets, allowing organisations to manage access controls in a unified manner,” Everts states.
- Role-Based Access Control (RBAC): According to Everts, Unity Catalog “implements Role-Based Access Control (RBAC), allowing organisations to assign roles and permissions based on user profiles.”
- Data lineage and auditing: This feature “helps organisations monitor data usage and dependencies, making it easier to identify and eliminate redundant or outdated data,” Everts explains. He adds that it also “logs all data access and changes, providing a detailed audit trail to ensure compliance with data security policies.”
- Cross-cloud and hybrid support: Everts points out that Unity Catalog “is designed to manage data governance in multi-cloud and hybrid environments” and “ensures that data is governed uniformly, regardless of where it resides.”

The company has also introduced Databricks AI/BI, a new business intelligence product that leverages generative AI to enhance data exploration and visualisation. Everts believes that “a truly intelligent BI solution needs to understand the unique semantics and nuances of a business to effectively answer questions for business users.”
The AI/BI system includes two key components:
- Dashboards: Everts describes this as “an AI-powered, low-code interface for creating and distributing fast, interactive dashboards.” These include “standard BI features like visualisations, cross-filtering, and periodic reports without needing additional management services.”
- Genie: Everts explains this as “a conversational interface for addressing ad-hoc and follow-up questions through natural language.” He adds that it “learns from underlying data to generate adaptive visualisations and suggestions in response to user queries, improving over time through feedback and offering tools for analysts to refine its outputs.”

Everts states that Databricks AI/BI is designed to provide “a deep understanding of your data’s semantics, enabling self-service data analysis for everyone in an organisation.” He notes it’s powered by “a compound AI system that continuously learns from usage across an organisation’s entire data stack, including ETL pipelines, lineage, and other queries.”
Databricks also unveiled Mosaic AI, which Everts describes as “a comprehensive platform for building, deploying, and managing machine learning and generative AI applications, integrating enterprise data for enhanced performance and governance.”
Mosaic AI offers several key components, which Everts outlines:
- Unified tooling: Provides “tools for building, deploying, evaluating, and governing AI and ML solutions, supporting predictive models and generative AI applications.”
- Generative AI patterns: “Supports prompt engineering, retrieval augmented generation (RAG), fine-tuning, and pre-training, offering flexibility as business needs evolve.”
- Centralised model management: “Model Serving allows for centralised deployment, governance, and querying of AI models, including custom ML models and foundation models.”
- Monitoring and governance: “Lakehouse Monitoring and Unity Catalog ensure comprehensive monitoring, governance, and lineage tracking across the AI lifecycle.”
- Cost-effective custom LLMs: “Enables training and serving custom large language models at significantly lower costs, tailored to specific organisational domains.”

Everts highlights that Mosaic AI’s approach to fine-tuning and customising foundation models includes unique features like “fast startup times” by “utilising in-cluster base model caching,” “live prompt evaluation” where users can “track how the model’s responses change throughout the training process,” and support for “custom pre-trained checkpoints.”
At the heart of these innovations lies the Data Intelligence Platform, which Everts says “transforms data management by using AI models to gain deep insights into the semantics of enterprise data.” The platform combines features of data lakes and data warehouses, utilises Delta Lake technology for real-time data processing, and incorporates Delta Sharing for secure data exchange across organisational boundaries.
Everts explains that the Data Intelligence Platform plays a crucial role in supporting new AI and data-sharing initiatives by providing:
- A unified data and AI platform that “combines the features of data lakes and data warehouses into a single architecture.”
- Delta Lake for real-time data processing, ensuring “reliable data governance, ACID transactions, and real-time data processing.”
- Collaboration and data sharing via Delta Sharing, enabling “secure and open data sharing across organisational boundaries.”
- Integrated support for machine learning and AI model development with popular libraries like MLflow, PyTorch, and TensorFlow.
- Scalability and performance through its cloud-native architecture and the Photon engine, “an optimised query execution engine.”

As a key sponsor of AI & Big Data Expo Europe, Databricks plans to showcase its open-source AI and data governance solutions during the event.
“At our stand, we will also showcase how to create and deploy – with Lakehouse apps – a custom GenAI app from scratch using open-source models from Hugging Face and data from Unity Catalog,” says Everts.
“With our GenAI app you can generate your own cartoon picture, all running on the Data Intelligence Platform.”
Databricks will be sharing more of their expertise at this year’s AI & Big Data Expo Europe. Swing by Databricks’ booth at stand #280 to hear more about open AI and improving data governance.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Ivo Everts, Databricks: Enhancing open-source AI and improving data governance appeared first on AI News.
BMC Software’s director of solutions marketing, Basil Faruqui, discusses the importance of DataOps, data orchestration, and the role of AI in optimising complex workflow automation for business success.
What have been the latest developments at BMC?
It’s exciting times at BMC and particularly our Control-M product line, as we are continuing to help some of the largest companies around the world in automating and orchestrating business outcomes that are dependent on complex workflows. A big focus of our strategy has been on DataOps specifically on orchestration within the DataOps practice. During the last twelve months we have delivered over seventy integrations to serverless and PaaS offerings across AWS, Azure and GCP enabling our customers to rapidly bring modern cloud services into their Control-M orchestration patterns. Plus, we are prototyping GenAI based use cases to accelerate workflow development and run-time optimisation.
What are the latest trends you’ve noticed developing in DataOps?
What we are seeing in the data world in general is continued investment in data and analytics software. Analysts estimate that spend on data and analytics software last year was in the $100 billion plus range. If we look at the Machine Learning, Artificial Intelligence & Data Landscape that Matt Turck at Firstmark publishes every year, it’s more crowded than ever before. It has 2,011 logos, and over five hundred were added since 2023. Given this rapid growth of tools and investment, DataOps is now taking centre stage as companies realise that to successfully operationalise data initiatives, they can no longer just add more engineers. DataOps practices are now becoming the blueprint for scaling these initiatives in production. The recent boom of GenAI is going to make this operational model even more important.
What should companies be mindful of when trying to create a data strategy?
As I mentioned earlier, the investment in data initiatives from business executives (CEOs, CMOs, CFOs, and so on) continues to be strong. This investment is not just for creating incremental efficiencies but for game-changing, transformational business outcomes as well. This means that three things become very important. First is clear alignment of the data strategy with the business goals, making sure the technology teams are working on what matters most to the business. Second is data quality and accessibility: the quality of the data is critical, and poor data quality will lead to inaccurate insights. Equally important is ensuring data accessibility – making the right data available to the right people at the right time. Democratising data access, while maintaining appropriate controls, empowers teams across the organisation to make data-driven decisions. Third is achieving scale in production. The strategy must ensure that Ops readiness is baked into the data engineering practices so it’s not something that gets considered only after piloting.
How important is data orchestration as part of a company’s overall strategy?
Data Orchestration is arguably the most important pillar of DataOps. Most organisations have data spread across multiple systems – cloud, on-premises, legacy databases, and third-party applications. The ability to integrate and orchestrate these disparate data sources into a unified system is critical. Proper data orchestration ensures seamless data flow between systems, minimising duplication, latency, and bottlenecks, while supporting timely decision-making.
What do your customers tell you are their biggest difficulties when it comes to data orchestration?
Organisations continue to face the challenge of delivering data products fast and then scaling quickly in production. GenAI is a good example of this. CEOs and boards around the world are asking for quick results as they sense that this could majorly disrupt those who cannot harness its power. GenAI is mainstreaming practices such as prompt engineering and prompt chaining. The challenge is how to take LLMs, vector databases, bots and so on, and fit them into a larger data pipeline that traverses a very hybrid architecture, from multiple clouds to on-prem, including mainframes for many. This just reiterates the need for a strategic approach to orchestration which allows folding in new technologies and practices for scalable automation of data pipelines. One customer described Control-M as a power strip of orchestration where they can plug in new technologies and patterns as they emerge, without having to rewire every time they swap older technologies for newer ones.
What are your top tips for ensuring optimum data orchestration?
There can be a number of top tips, but I will focus on one: interoperability between application and data workflows, which I believe is critical for achieving scale and speed in production. Orchestrating data pipelines is important, but it is vital to keep in mind that these pipelines are part of a larger ecosystem in the enterprise. Let’s consider an ML pipeline deployed to predict which customers are likely to switch to a competitor. The data that comes into such a pipeline is a result of workflows that ran in the ERP/CRM and a combination of other applications. Successful completion of the application workflows is often a pre-requisite to triggering the data workflows. Once the model identifies customers that are likely to switch, the next step perhaps is to send them a promotional offer, which means that we will need to go back to the application layer in the ERP and CRM. Control-M is uniquely positioned to solve this challenge as our customers use it to orchestrate and manage intricate dependencies between the application and the data layer.
What do you see as being the main opportunities and challenges when deploying AI?
AI, and specifically GenAI, is rapidly increasing the number of technologies involved in the data ecosystem: lots of new models, vector databases and new automation patterns around prompt chaining and the like. This challenge is not new to the data world, but the pace of change is picking up. From an orchestration perspective we see tremendous opportunities with our customers because we offer a highly adaptable platform for orchestration where they can fold these tools and patterns into their existing workflows rather than going back to the drawing board.
Do you have any case studies you could share with us of companies successfully utilising AI?
Domino’s Pizza leverages Control-M for orchestrating its vast and complex data pipelines. With over 20,000 stores globally, Domino’s manages more than 3,000 data pipelines that funnel data from diverse sources such as internal supply chain systems, sales data, and third-party integrations. This data from applications needs to go through complex transformation patterns and models before it’s available for driving decisions related to food quality, customer satisfaction, and operational efficiency across its franchise network.
Control-M plays a crucial role in orchestrating these data workflows, ensuring seamless integration across a wide range of technologies like MicroStrategy, AMQ, Apache Kafka, Confluent, GreenPlum, Couchbase, Talend, SQL Server, and Power BI, to name a few.
Beyond just connecting complex orchestration patterns together, Control-M provides them with end-to-end visibility of pipelines, ensuring that they meet strict service-level agreements (SLAs) while handling increasing data volumes. Control-M is helping them generate critical reports faster, deliver insights to franchisees, and scale the roll-out of new business services.
What can we expect from BMC in the year ahead?
Our strategy for Control-M at BMC will stay focused on a couple of basic principles:
Continue to allow our customers to use Control-M as a single point of control for orchestration as they onboard modern technologies, particularly on the public cloud. This means we will continue to provide new integrations to all major public cloud providers to ensure they can use Control-M to orchestrate workflows across three major cloud infrastructure models of IaaS, Containers and PaaS (Serverless Cloud Services). We plan to continue our strong focus on serverless, and you will see more out-of-the-box integrations from Control-M to support the PaaS model.
We recognise that enterprise orchestration is a team sport, which involves coordination across engineering, operations and business users. And, with this in mind, we plan to bring a user experience and interface that is persona based so that collaboration is frictionless.
Specifically, within DataOps we are looking at the intersection of orchestration and data quality with a specific focus on making data quality a first-class citizen within application and data workflows. Stay tuned for more on this front!
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Basil Faruqui, BMC Software: How to nail your data and AI strategy appeared first on AI News.
Healthcare documentation is an integral part of the sector, ensuring the delivery of high-quality care and maintaining the continuity of patient information. However, as healthcare providers contend with ever-growing volumes of data, managing it can feel overwhelming. The advent of intelligent document processing (IDP) technology offers a new solution. This article explores how the technology works, its role in healthcare documentation, and its benefits, limitations, and implications for the future.
Intelligent document processing and its importance
Intelligent document processing is an advanced form of automation that combines AI, machine learning, natural language processing, and optical character recognition to collect, process, and organise data from many kinds of paperwork. Unlike traditional document systems, IDP can handle the unstructured and semi-structured data found in healthcare documents, which exist in many forms. Built on these AI techniques, IDP can enhance the work of healthcare providers and assist them in the care delivery process.
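To give a flavour of the extraction step IDP automates, here is a deliberately simplified, rule-based sketch that turns a short report into a structured, searchable record. Real IDP systems combine OCR, NLP and machine learning rather than fixed patterns, and the report text and field names below are invented.

```python
import json
import re

# Invented example report; a real pipeline would start from OCR output.
report = """
Patient ID: 10482
Date: 2024-03-14
Diagnosis: Type 2 diabetes
Follow-up required: yes
"""

patterns = {
    "patient_id": r"Patient ID:\s*(\d+)",
    "date": r"Date:\s*([\d-]+)",
    "diagnosis": r"Diagnosis:\s*(.+)",
    "follow_up": r"Follow-up required:\s*(yes|no)",
}

record = {}
for field, pattern in patterns.items():
    match = re.search(pattern, report, flags=re.IGNORECASE)
    record[field] = match.group(1).strip() if match else None

print(json.dumps(record, indent=2))  # structured, searchable output
```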
IDP’s role in healthcare documentation
Multiple forms of documents, like health, employment, or insurance records, reports, notes, forms, and social documents, have to be dealt with by multiple providers daily. IDP can reduce the need for inefficient data management processes through:
- Automating the data extraction process by automatically capturing essential information from documents, which reduces human error and enhances performance,
- Establishing more accurate data with AI algorithms: IDP ensures that the data captured is accurate and consistent, which is crucial for patient safety and care quality,
- Organising data in a searchable format to allow better data access,
- Ensuring compliance with regulations like HIPAA by securely managing sensitive patient data and providing audit trails.

Benefits of IDP in healthcare
The implementation of IDP in healthcare comes with several benefits:
- Increased efficiency: By automating routine tasks, healthcare providers can focus more on patient care rather than paperwork,
- Cost reduction: IDP reduces the need for manual data entry and paper-based processes, leading to significant cost savings,
- Better patient experience: Quick access to patient history and records leads to more informed decision-making and personalised care,
- Scalability: As healthcare facilities grow, IDP systems can easily scale to manage increased data volumes without compromising performance.

Challenges in implementing IDP
While IDP offers many advantages, there are challenges to its adoption:
- Integration with existing systems: Integrating IDP with current healthcare IT ecosystems can be complex and requires careful planning,
- Data privacy concerns: Protecting patient data is paramount, and IDP must adhere to stringent security standards,
- Change management: Staff may resist shifting from manual to automated processes, necessitating adequate training and change management strategies.

Future of IDP in healthcare
In the future, IDP is likely to increase its impact in the healthcare field. Given the rise of AI and machine learning, the corresponding systems will become increasingly sophisticated, likely providing predictive analytics and decision support services. This could help improve diagnostic precision and create a more personalised patient treatment plan, eventually leading to better outcomes. In addition, IDP may facilitate data exchange between different healthcare systems.
Conclusion
Intelligent document processing is a practical solution that is bound to become increasingly impactful in healthcare. It can help healthcare professionals deal more effectively with the contemporary challenges of patient data. Although challenges exist, the potential gains in patient care, reduced expenses, and more precise data make IDP an invaluable asset. Intelligent document processing should therefore be considered one of the healthcare industry’s key solutions in its move toward digitalisation.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Enhancing healthcare documentation with IDP appeared first on AI News.
Source Link
- Read more...
-
- 0 comments
- 13 views
-
Staying competitive in modern sales effectively means embracing the latest trends in tech.
Since late 2022 – when generative AI made its way to the public’s consciousness thanks to OpenAI’s ChatGPT – AI has been at the forefront of this shift, changing the way sales teams (like most other teams) operate and connect with clients.
In this blog post, let’s dive into how AI is streamlining sales activities and helping boost conversion rates.
Here are the top five ways sales teams can use AI to better personalise interactions, automate admin work, and more, proving that it’s not just about cutting costs but transforming how sales are done.
1. Personalised engagement
A typical sales cycle is complex, involving multiple touchpoints and interactions before conversion. Deeper personalisation involves understanding a prospect’s business needs, challenges, and industry trends. AI tools are particularly adept at sifting through large datasets to uncover insights that tailor interactions to these specific business contexts.
For instance, AI can analyse past interactions, like email exchanges and engagement history, to determine what type of content or product features are most relevant to a specific client. This allows sales teams to offer solutions that are not just generic services or products but are customised to address the client’s unique challenges and goals.
AI can enhance account-based marketing (ABM) strategies by enabling sales teams to create highly personalised content strategies for each account. By analysing data from various touchpoints in the quote to cash process, AI helps in crafting messages that resonate deeply with each decision-maker in the client’s organisation. This targeted approach not only strengthens relationships but also significantly increases the likelihood of closing deals.
2. Sales forecasting
Accurate sales forecasting is vital in B2B sales, where strategic planning and resource allocation depend heavily on predicted sales outcomes. AI significantly enhances the accuracy and reliability of these forecasts by analysing vast amounts of data and identifying trends that are hard to spot.
AI-driven pipeline forecasting tools use historical sales data, market conditions, and real-time sales activity to predict future sales performance. These tools employ predictive analytics to model various scenarios and their potential impacts on sales, helping sales teams to prepare more effectively for future market movements.
Moreover, AI-enhanced forecasting tools can dynamically update predictions based on new data. This means that sales forecasts are not static but evolve as more interaction and transaction data becomes available. Such dynamic forecasting ensures that sales strategies remain agile and responsive to changes, increasing the overall efficiency of sales operations.
By leveraging AI for advanced sales forecasting, B2B companies can not only forecast with greater accuracy but also gain strategic insights that can lead to a more proactive approach in managing sales pipelines and customer relationships.
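As a minimal illustration of the idea (not a production forecasting system), the sketch below fits a simple trend model to illustrative monthly revenue figures and projects the next quarter; refitting as each month's data arrives mirrors the dynamic updating described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative monthly revenue figures (thousands); real forecasts would draw on
# CRM history, market conditions and live pipeline data.
revenue = np.array([210, 225, 240, 238, 260, 275, 290, 284, 310, 330, 345, 360])
months = np.arange(len(revenue)).reshape(-1, 1)  # 0, 1, 2, ... as the trend feature

model = LinearRegression().fit(months, revenue)

# Forecast the next quarter; refit the model whenever a new month's data arrives.
next_quarter = np.arange(len(revenue), len(revenue) + 3).reshape(-1, 1)
print(model.predict(next_quarter).round(1))
```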
3. Dynamic pricing
Dynamic pricing is an advanced AI application that can significantly boost B2B sales performance by optimising pricing strategies based on real-time market data and customer behaviour. This technology allows companies to adjust their pricing models swiftly in response to changes in the market or customer demand, ensuring competitiveness and maximising revenue.
AI tools like Competera analyse historical sales data, market dynamics, competitor pricing, and customer patterns to recommend the most effective pricing strategies for various products and services. For instance, it can suggest special discounts for high-value clients or adjust prices during peak demand periods to capitalise on market trends.
AI-driven dynamic pricing can enhance customer satisfaction by offering fair prices that reflect the current value of the products or services, which can differ across customer segments or even individual clients based on their purchase history and loyalty.
By integrating dynamic pricing models powered by AI, sales teams not only streamline their pricing strategies but also ensure that they are adaptable, data-driven, and closely aligned with both market conditions and customer expectations.
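A heavily simplified, rule-based sketch of the idea is shown below; the thresholds, discount levels and example inputs are assumptions for illustration, whereas tools of the kind described above would learn such parameters from historical, competitor and demand data.

```python
# Minimal rule-based sketch of demand- and segment-aware price adjustment.
def dynamic_price(base_price: float, demand_index: float, loyal_customer: bool) -> float:
    price = base_price
    if demand_index > 1.2:        # demand well above the rolling average
        price *= 1.05
    elif demand_index < 0.8:      # demand soft: discount to stay competitive
        price *= 0.93
    if loyal_customer:            # reward high-value, repeat buyers
        price *= 0.97
    return round(price, 2)

print(dynamic_price(1000.0, demand_index=1.3, loyal_customer=True))   # 1018.5
print(dynamic_price(1000.0, demand_index=0.7, loyal_customer=False))  # 930.0
```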
For B2B companies aiming to refine their pricing and sales strategies, an AI consulting service is a crucial edge. By engaging advanced data analytics and AI/ML expertise, these services enhance data-driven decision-making, improve customer relationships, and accelerate sales cycles, fostering a more competitive and efficient sales process.
4. Lead scoring and prioritisation
When you have a healthy influx of leads, efficiently managing them is crucial. Sales teams can use AI to dramatically enhance this process through sophisticated lead scoring systems, which assess and rank prospects based on their likelihood to convert. This prioritisation ensures that sales teams focus their efforts on the most promising leads, optimising both time and resources.
AI tools integrate various data points like past interactions, engagement levels, company size, and industry-specific behaviours to create a comprehensive profile of each lead. AI algorithms can examine historical data to recognise patterns that indicate a high probability of conversion. This might include the frequency of communications, the types of questions asked by the prospect, or their engagement with specific content.
For example, Salesforce Einstein uses machine learning to continuously refine its scoring model based on new data, making the lead qualification process more dynamic and accurate. By automating the identification of high-potential leads, sales teams can allocate more time to crafting personalised outreach strategies that are more likely to resonate with top-tier prospects.
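For readers who want to see the underlying idea, the sketch below trains a tiny logistic-regression scorer with scikit-learn. It is an illustrative stand-in rather than Salesforce Einstein’s actual model, and the features and training data are invented.

```python
# Minimal sketch of ML-based lead scoring with scikit-learn -- an
# illustrative stand-in, not Salesforce Einstein's actual model.
# Features and training data are invented for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: email opens, meetings held, content downloads, company size (log employees)
X_train = np.array([
    [2, 0, 1, 2.0],
    [8, 2, 4, 3.5],
    [1, 0, 0, 1.5],
    [6, 1, 3, 3.0],
    [9, 3, 5, 4.0],
    [0, 0, 0, 2.5],
])
y_train = np.array([0, 1, 0, 1, 1, 0])   # 1 = converted historically

model = LogisticRegression().fit(X_train, y_train)

new_lead = np.array([[7, 2, 3, 3.2]])
score = model.predict_proba(new_lead)[0, 1]     # probability of conversion
print(f"Lead score: {score:.2f}")                # rank leads by this value
```

In practice the model would be retrained as new conversion data arrives, which is what keeps the scoring dynamic.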
Moreover, AI-powered lead scoring can alert sales teams to changes in a lead’s score in real time. This means that if a prospect’s engagement level increases due to a recent interaction or a change in their business needs, the sales team can immediately capitalise on this opportunity, increasing the chances of a successful sale.
So, by leveraging AI for lead scoring and prioritisation, sales teams can ensure they are not just reaching out to more leads, but are reaching out to the right leads at the right time.
5. Automating administrative tasks
AI’s ability to automate administrative tasks is a game changer in B2B sales, where efficiency and time management are critical. By taking over routine tasks, AI allows sales teams to dedicate more energy and focus to engaging with clients and closing deals.
For instance, AI-powered CRM tools can handle data entry, manage email sequences, schedule meetings, and update logs with new client information. This automation streamlines the sales process, reducing the administrative burden and minimising the potential for human error.
AI-driven automation extends to crafting and sending follow-up emails. AI can analyse the interaction history with each client to determine the most effective follow-up strategy, tailoring messages based on the client’s previous responses and engagement level. This personalised approach ensures that communications are relevant and timely, thereby increasing the likelihood of maintaining the client’s interest and pushing the sales process forward.
In addition, AI can offer predictive insights about the best times to contact clients or send out proposals, based on data patterns that include client availability and response rates. This predictive capability ensures that sales efforts are not just systematic but also strategically timed, maximising the impact of each interaction.
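As a simplified sketch of that timing idea, the Python snippet below picks a follow-up hour from a hypothetical interaction log by comparing historical reply rates; a production CRM assistant would weigh far more signals, such as channel, time zone and deal stage.

```python
# Minimal sketch: picking a follow-up send time from historical response
# data. The interaction log and time buckets are hypothetical.

from collections import defaultdict
from datetime import datetime

# (sent_at, got_reply) pairs pulled from an invented interaction history
history = [
    (datetime(2024, 9, 2, 9),  True),
    (datetime(2024, 9, 3, 14), False),
    (datetime(2024, 9, 4, 9),  True),
    (datetime(2024, 9, 5, 16), False),
    (datetime(2024, 9, 6, 10), True),
]

replies = defaultdict(lambda: [0, 0])   # hour of day -> [replies, sends]
for sent_at, got_reply in history:
    bucket = replies[sent_at.hour]
    bucket[1] += 1
    if got_reply:
        bucket[0] += 1

# Choose the hour with the best historical reply rate.
best_hour = max(replies, key=lambda h: replies[h][0] / replies[h][1])
print(f"Schedule the next follow-up around {best_hour}:00")
```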
By leveraging AI to automate these essential but repetitive tasks, B2B sales teams can significantly improve their productivity and effectiveness, allowing them to focus on what they do best – building relationships and closing sales.
Wrapping up
The integration of AI tools in modern sales processes brings efficiency and effectiveness, allowing sales teams to focus on strategic aspects of sales like relationship building and closing high-value deals. Teams that embrace AI can expect not only increased conversion rates but also more responsive sales operations that can adapt quickly to market changes and customer needs.
All in all, companies that welcome ongoing adaptation and investment in AI tools will be well-positioned to lead in their industries, leveraging AI not just as a tool, but as a core component of their sales strategy.
(Image Source: Freepik)
The post How sales teams can use AI today to optimise conversions appeared first on AI News.
SS&C Blue Prism’s VP of sales for the UK, Ireland and Benelux, Mark Lockett, discusses the firm’s latest developments, customer challenges and how to get the most out of intelligent automation tools.
Can you tell us a little bit about SS&C Blue Prism and what it does?
SS&C Blue Prism is a specialist in the field of Intelligent Automation, providing products and solutions that change the way in which our customers deliver the work they undertake.
We talk about automation augmenting the workforce, and we can do that by using a digital workforce that brings additional capacity to your human workforce. The rationale being we get a digital worker to do those repetitive, high volume, low value added tasks, and we then allow your employees to focus on the value add that they can bring.
Intelligent Automation is really looking at the whole cycle of how to deliver the required work through the most efficient channel. That could include orchestration using business process management capabilities. It could also look at process identification through Blue Prism Process Intelligence technologies, where we’re trying to identify those tasks that lend themselves to be automated by technology.
The dual effect of automation and orchestration of tasks that customers have to do day in, day out is where SS&C Blue Prism brings most value to its customers. A digital workforce could be aimed at improving an HR onboarding process, improving your finance period-end close process or transferring information from an outpatient system to an electronic patient record system and vice versa. The use cases are many and varied, but the principle remains the same: use the right channel to deliver the work effort. The beauty of a digital workforce comes in the ability to flex work demands as and when necessary.
What have been the latest developments at the company?
We’ve been putting a lot of our time, effort and resources into our Next Gen platform. That’s our cloud-native platform that provides access to intelligent automation capabilities, delivered in a way that suits our customers best. It helps customers enjoy the benefits of the cloud while keeping the work where it needs to be. With this hybrid deployment, Next Gen allows customers to take advantage of using the cloud, while having a self-hosted digital workforce that operates behind the customer’s firewall, on their own secure infrastructure – meaning no sensitive data leaves their network.
For many customers that operate in highly regulated industries, that hybrid model really does create the opportunity for us to enhance the way we deliver through the Next Gen platform. Next Gen also brings together, in a single repository, all the capabilities that allow us to improve the business processes we’re undertaking on behalf of our customers.
Also, I think we’d have been living under a rock if we hadn’t appreciated the fact that Gen AI is really where the market is pivoting. We’re heavily looking into understanding how we can use that technology to really change the way that we work. We’ve introduced capabilities that allow us to integrate with a variety of large language models so our customers can adopt Gen AI. And the way in which we consider that is by using this concept that Gen AI is the capability, which is effectively the brain that allows you to have the emotional, considered response, and the digital workers are the arms and legs that deliver that work.
So the brain, the Gen AI, does the thinking, and then the digital workforce does the actual doing. When Gen AI is wrapped into Intelligent Automation processes, it means it’s fully auditable and secure. With many customers hesitant to fully dive into using Gen AI due to security concerns, the combination is compelling. That’s something our customers are really excited about in terms of driving use of Gen AI. We’re seeing it in a number of places now, where Gen AI is used to manage customer-facing interactions, employee interactions and supplier interactions. Those queries can come in through a variety of channels, be that telephone, email or chat, and Gen AI can pick up and author the response, which the automation platform then executes.
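As a rough illustration of the "brain and arms-and-legs" pattern Lockett describes, the Python sketch below has a stand-in LLM call author a reply while a separate automation step delivers it and writes an audit record. It is not SS&C Blue Prism’s API; draft_reply(), send_email() and the audit log are hypothetical placeholders.

```python
# Illustrative sketch: an LLM drafts the response (the "brain"), and a
# separate automation step sends it and records an audit trail (the
# "arms and legs"). All functions here are hypothetical stand-ins.

import json
from datetime import datetime, timezone

def draft_reply(query: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"Thank you for your message about '{query}'. Here is what we can do..."

def send_email(recipient: str, body: str) -> None:
    """Stand-in for the digital worker executing the task."""
    print(f"[worker] sending to {recipient}: {body[:60]}...")

def handle_customer_query(recipient: str, query: str, audit_log: list) -> None:
    draft = draft_reply(query)                      # Gen AI authors the response
    send_email(recipient, draft)                    # automation delivers it
    audit_log.append({                              # every step is auditable
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recipient": recipient,
        "query": query,
        "response": draft,
    })

audit_log = []
handle_customer_query("customer@example.com", "invoice discrepancy", audit_log)
print(json.dumps(audit_log, indent=2))
```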
I speak to a lot of end users, and the main thing they say about AI, because it’s so topical right now, is that they think they should be using it. The problem for many, though, is that they don’t know how or why. They’re worried they’re going to be left behind if they don’t get on board with it, but maybe it isn’t even suitable for them.
I couldn’t agree more. I think for a lot of our customers, and in a lot of the customer conversations you have, there is this view that we’ve got to do something, we’ve got to have a budget. And invariably there are budgets around for Gen AI. A lot of that is in the pilot phase right now, and if you look at some of the evidence, many of those pilots haven’t necessarily gone that well.
Part of the problem is that many are considering Gen AI without thinking about the business problem they’re trying to solve. The attitude is: we know we’ve got this shiny new bit of kit and we should be using it. How to use it and what to do with it is almost a secondary consideration.
The conversation that we really try to move to with the customer is ‘what is the problem that you’re trying to solve? What is the customer issue that you’re trying to solve?’ And we’re certainly seeing that through three main lenses in terms of that use case for Gen AI.
The customer interaction, the employee interaction, or the citizen interaction, if it’s a member of the public. We’re seeing some really interesting things right now about how we are supporting our Gen AI partners, because most of what we are doing is facilitating the use of a third party large language model. We are effectively providing the framework by which our partners can interact with the customer and solve the customer problem.
What kind of trends have you seen developing in Intelligent Automation recently?
There are a number of things that our customers talk to us about. One we’ve already spoken about is this notion of Gen AI: we’ve got to do it, but what are we going to do and how are we going to do it? We need to use Gen AI, and we need to automate with it. There are a number of pilot initiatives that we see because of that. There’s been so much hype around the business value of Gen AI that I think it’s quite scary for some.
There was a recent industry report by McKinsey that talked about a $4.4 trillion market opportunity with Gen AI. There are some absolutely unbelievable numbers that are thrown out about that. I think the reality of that is slightly more considered. And I think it’s not just about how we can change the way we work. It’s really about how can I get a better outcome for the stakeholder, whomever that may be, by deploying Gen AI with automation? So that’s one of the first trends.
The second thing that’s really interesting concerns customers that have already adopted process automation. They’ve used digital workers to either reduce existing costs or improve productivity. So they’ve used it initially as an opportunity for a bit of cost control, improving and automating some processes. But that is now taking them to the next level, which is looking at how to use process intelligence to identify further process enhancements they can make. We’re talking about targeting huge organisational objectives through the use of Intelligent Automation, such as growth, customer satisfaction, or employee satisfaction, to name just a few.
I think many companies have taken the low-hanging fruit through automation, and now they are investing in those technologies around process identification so they can be sure that what they’re automating are the right things and that they deliver value. But are they? Are they leaving things uncovered by not using process intelligence in support of the business operation? That is becoming more of a story that our customers are really getting into, and we’ve had a number of deployments where customers have done those initial automation activities and are now looking to take it to the next level.
The third thing we see more of is co-existence with Microsoft Power applications. We’re seeing customers adopting those capabilities alongside technologies such as ours, with the two co-existing in support of each other. We see that more and more, and I think it’s a trend that many customers recognise in the way they’re working. It’s not a one-size-fits-all approach; the question is what is the most appropriate technology.
What are your customers’ biggest challenges? And how can Intelligent Automation help them deal with those?
The number one challenge is cost control. How do we manage in a market of rising prices? How do we make sure that we’re getting value for money from the automation? We continue to advocate and demonstrate the value that automation is bringing. Be really structured in how you assess the benefit the automation is bringing, because you are accounting for that spend and you’ve got to prove that it’s worthwhile.
For example, what’s the impact on FTE savings? What’s the volume of automations that I’m delivering? What’s the average cost of an employee doing that work? Multiply one by the other and that’s the FTE saving that goes into the business case. So it is actual cost control, but measured in terms of the business efficiency gained as a consequence. But where the magic happens is being able to demonstrate what those extra hours have enabled you to do. Have you been able to launch better products more quickly? Have you improved employee satisfaction? Cost factors are always important, but customers must look beyond them to make full use of automation.
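As a rough, hypothetical version of the back-of-envelope calculation Lockett describes, the snippet below converts automated transaction volumes into hours, FTEs and cost; every figure in it is an invented assumption rather than customer data.

```python
# Illustrative back-of-envelope FTE saving; all figures are hypothetical.

automations_per_year = 50_000          # volume of automated transactions
minutes_saved_each = 6                 # manual handling time per transaction
working_hours_per_fte = 1_650          # productive hours per employee per year
cost_per_fte = 35_000                  # fully loaded annual cost (GBP)

hours_saved = automations_per_year * minutes_saved_each / 60
fte_saved = hours_saved / working_hours_per_fte
saving = fte_saved * cost_per_fte

print(f"{hours_saved:,.0f} hours = {fte_saved:.1f} FTE = £{saving:,.0f} per year")
```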
Many, if not most, of our customers have their own centres of excellence that need to be able to demonstrate a value to the business. So that’s the number one conversation we get with our customers. How do we continue to justify the investment in the technology?
What advice would you give to any companies thinking about implementing Intelligent Automation?
For any customer considering introducing Intelligent Automation, the starting point is: what is the problem you’re looking to solve? That’s the crux of the matter. Often you find that customers look to technologies such as ours when they know they have a challenge with their existing technology estate. They’ve got a high degree of technology debt in their IT estate, and one of the ways they can overcome some of those limitations is by adopting Intelligent Automation.
So think about the problem you’re trying to solve, and to do that you need a really good understanding of what the actual business processes look like, not just what you think they look like, because often what you think they look like and what they actually look like are very different. That’s where things like process intelligence come in to support that. So again: what is the problem you’re looking to solve?
The next thing to consider is how you plan to support that moving forward. Where our customers continue to invest in the technology and in developing the solution capability, they then need to start being advocates for automation technologies within the business. Once you are doing that, you are the ones effectively going to other parts of the business and trying to identify those automation use cases.
Our really successful customers are the ones with an internal champion who goes out to other parts of the business, because for many areas of the business this is quite a well-kept secret. Part of that role is helping people understand what the technology can deliver by way of automation, streamlining and process improvement, because it’s not that widely understood. We often find that when employees realise what benefits it brings to their team, demand for those internal champions becomes huge.
For some people, this notion of Intelligent Automation with digital workers conjures up a sort of Metal Mickey robot image, and we’re not talking about that at all. We’re talking about using computers to emulate human interactions and, with Gen AI, emulating the human conversation that goes with them.
So it becomes really quite powerful, but you’ve got to think about how you’re going to sustain that. What does a centre of excellence look like? What have I got by way of developers that can write the automations? What have I then got, by way of business analysts, that can then help us support and find the automations that we need?
Think about what the initial use cases could look like. A business case on the whole is very easy to write. Where the challenge comes is how do I then sustain and grow the automation footprint? And the customers that are doing it really successfully are either partnering with someone who continues to deliver that function for them, or they’re bringing together their own centre of excellence in house, and they are then tasked with being the champions for further deployment.
What plans does SS&C Blue Prism have for the year ahead?
It’s something we’ve already touched upon. We are absolutely focused on transitioning our customers to the Next Gen capability and on embracing the technology opportunity that comes with it. Our customers have had real input into the development roadmap for the technology and into how we are moving with it.
Our customers are really looking at when is the optimum opportunity for them to deploy Next Gen. That’s going to be a focus in the short to medium term. And the benefit that offers to our customers is really exciting, particularly when you’re talking about a global customer, where they have operations in a variety of geographies. And actually by having that central automation capability you can deploy the actual workers within each of the regions. That gives you a real step change in terms of the efficiency of automation and the ease by which you monitor and manage those automations as well.
And then there is the whole value that Gen AI brings, which, as others are also finding, is a big focus for us. We’ve got a number of customers doing some really interesting things. We’ve just been successful with a customer project, a public sector body that is looking at the way it transforms the citizen experience, and Gen AI has a huge part to play in that. We see that as something that will continue to improve over time.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Mark Lockett, SS&C Blue Prism: Enhancing human capabilities with digital workforces appeared first on AI News.