Undoubtedly, AI will be a critical part of building applications in the future, but developers often struggle to understand how they can incorporate AI-powered features into applications today.
Public cloud providers, such as AWS, Microsoft and Google, offer API-based machine learning services that make it easier for developers to add AI features to their existing workflows. But before you jump in, you need to understand how to integrate these AI APIs into an application and consider potential limitations or drawbacks to their use.
AI integration and best practices
The integration process varies depending on the language the existing application is written in, which AI services will be used and where the data is stored. In general, however, the first step is to configure proper permissions for accessing the API and any relevant data using your cloud provider's identity and access management service. The data also must be accessible from both your app and the cloud AI service.
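As a sketch of that first step, the snippet below builds an IAM policy document that grants an application read access to its data bucket and permission to call an AWS AI service. The bucket name, role name and the choice of Amazon Rekognition are illustrative assumptions, not prescriptions.

```python
import json

# Hypothetical bucket name -- replace with your own.
BUCKET = "my-app-data"

def build_ai_access_policy(bucket: str) -> dict:
    """Build an IAM policy document that lets an app read objects from
    its data bucket and call Amazon Rekognition's DetectLabels API."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["rekognition:DetectLabels"],
                "Resource": "*",
            },
        ],
    }

policy_json = json.dumps(build_ai_access_policy(BUCKET))
# With boto3, you would attach this to the app's execution role, e.g.:
#   iam = boto3.client("iam")
#   iam.put_role_policy(RoleName="my-app-role",
#                       PolicyName="ai-api-access",
#                       PolicyDocument=policy_json)
```

Scoping the policy to a single bucket and a single API action keeps the app's permissions as narrow as the integration actually requires.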
Once permissions and storage are properly configured, calls to a cloud AI service become straightforward in most cases, and developers shouldn't have to make significant changes to existing code. For example, if you build a Python app and want to use one of the AWS AI API services, you can import Amazon's boto3 SDK into your app and make calls directly within your code. If there is no SDK for your app's programming language, most cloud AI services are also accessible via standard API calls.
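A minimal sketch of such a call, assuming AWS credentials are already configured in the environment and the image is stored in S3 (the bucket and key names are hypothetical):

```python
def parse_labels(response: dict) -> list:
    """Extract label names from a Rekognition DetectLabels response."""
    return [label["Name"] for label in response.get("Labels", [])]

def detect_labels(bucket: str, key: str, max_labels: int = 10) -> list:
    """Ask Amazon Rekognition to label an image already stored in S3."""
    # Imported inside the function so the parsing helper above can be
    # used (and tested) without boto3 installed.
    import boto3
    client = boto3.client("rekognition")
    response = client.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
    )
    return parse_labels(response)

# Usage (requires AWS credentials and an uploaded image):
# labels = detect_labels("my-app-data", "photos/storefront.jpg")
```

Keeping the response-parsing logic separate from the network call also makes the integration easy to unit test with canned responses.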
However, developers may encounter some common challenges. For example, slow connections can hurt performance when you rely on the internet to upload images for analysis by a service -- such as Amazon Rekognition -- or translate speech to text in real time. Strip unnecessary components from data before you upload it, and host your application in the same cloud where your AI service is hosted to reduce data transfer issues.
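One way to strip unneeded components is to whittle the request body down to the fields the AI service actually reads before uploading. The field names below are illustrative; the point is the size difference between the full record and the slimmed payload.

```python
import json

def slim_payload(record: dict, keep=("id", "text")) -> bytes:
    """Keep only the fields the AI service needs (field names are
    illustrative) and serialize compactly to shrink the upload."""
    slim = {k: record[k] for k in keep if k in record}
    return json.dumps(slim, separators=(",", ":")).encode("utf-8")

full = {
    "id": 7,
    "text": "Translate this sentence.",
    "thumbnail_b64": "A" * 50_000,   # app-side data the API never reads
    "audit_log": ["created", "edited"],
}
body = slim_payload(full)
print(len(json.dumps(full)), "->", len(body))  # slimmed body is far smaller
```

On a slow connection, trimming dead weight like this can matter as much as co-locating the app with the AI service.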
Cloud providers' AI services are also designed to lock users into a certain ecosystem. It's not practical to use multiple vendors' AI APIs in the same application -- nor is it easy to switch from one vendor's AI suite to another, since doing so would require an overhaul of the application's API integration code.
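One way to soften that lock-in is to code the application against a thin interface of your own and confine each vendor's SDK to a single adapter class. The interface and class names below are illustrative, not part of any vendor's API:

```python
from abc import ABC, abstractmethod

class TranscriptionService(ABC):
    """App-owned interface: call sites depend on this, not on a vendor SDK,
    so switching vendors means writing one new adapter."""
    @abstractmethod
    def transcribe(self, audio_uri: str) -> str:
        ...

class AwsTranscribeAdapter(TranscriptionService):
    """Vendor-specific adapter (sketch only -- the real version would
    call AWS Transcribe via boto3 here)."""
    def transcribe(self, audio_uri: str) -> str:
        raise NotImplementedError("wire up boto3 here")

class FakeTranscriptionService(TranscriptionService):
    """Stub used for tests and local development."""
    def transcribe(self, audio_uri: str) -> str:
        return f"transcript of {audio_uri}"
```

The adapter doesn't eliminate migration work, but it concentrates it in one class instead of scattering vendor-specific calls across the codebase.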
It's also important for enterprises to remember that AI is not perfect. When users rely on AI APIs to convert text to speech or search through images, there is a margin of error. The AI service might not transcribe all of the words correctly, or it might misinterpret some images. These types of mistakes are also common when users rely on manually entered data or metadata. Developers must ensure that their applications can handle situations where data produced or processed by AI is inaccurate or incomplete.
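One common way to handle that margin of error is to act on the confidence scores most AI services return with their results. The sketch below assumes a speech-to-text response reduced to word/confidence pairs and an illustrative threshold; anything below the threshold is flagged for human review rather than trusted blindly.

```python
LOW_CONFIDENCE = 0.85  # illustrative threshold -- tune for your use case

def review_transcript(items, threshold=LOW_CONFIDENCE):
    """Split transcription items into accepted words and words flagged
    for human review, based on the service's confidence score."""
    accepted, flagged = [], []
    for item in items:
        if item["confidence"] >= threshold:
            accepted.append(item["word"])
        else:
            flagged.append(item["word"])
    return accepted, flagged

items = [
    {"word": "schedule", "confidence": 0.98},
    {"word": "meating", "confidence": 0.41},  # likely mis-transcription
]
accepted, flagged = review_transcript(items)
```

The same pattern applies to image analysis: discard or escalate labels whose confidence falls below the level your application can tolerate.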
Lastly, just because you can take advantage of AI doesn't mean you should. Various cloud services have made it easy for developers to infuse their applications with AI, but not everyone is enthralled with chatbots or wants to see personalized product recommendations every time they log into an application. Before IT teams add these features, they should talk with their product design team and get a sense for whether AI services will actually enhance their app in a way that matters to end users.