Developer Experience: Demand to support engineering teams has risen, and there is a shift from traditional DevOps to workflow improvements.
The future of AI-driven development: join the discussion and explore insights on the roles of low code and AI in building mission-critical apps.
An Introduction to Object Mutation in JavaScript
*You* Can Shape Trend Reports: Join DZone's GenAI Research + Enter the Prize Drawing!
Developer Experience
With tech stacks becoming increasingly diverse and AI and automation continuing to take over everyday tasks and manual workflows, the tech industry at large is experiencing a heightened demand to support engineering teams. As a result, the developer experience is changing faster than organizations can consciously maintain.

We can no longer rely on DevOps practices or tooling alone — there is even greater power recognized in improving workflows, investing in infrastructure, and advocating for developers' needs. This nuanced approach brings developer experience to the forefront, where devs can begin to regain control over their software systems, teams, and processes.

We are happy to introduce DZone's first-ever Developer Experience Trend Report, which assesses where the developer experience stands today, including team productivity, process satisfaction, infrastructure, and platform engineering. Taking all perspectives, technologies, and methodologies into account, we share our research and industry experts' perspectives on what it means to effectively advocate for developers while simultaneously balancing quality and efficiency. Come along with us as we explore this exciting chapter in developer culture.
Getting Started With Agentic AI
Java Application Containerization and Deployment
Advancing in a software engineering career can be a daunting challenge. Many engineers find themselves stuck, unsure of what steps to take to move from a mid-level role to senior positions such as staff, principal, or distinguished engineer. While technical knowledge is essential, the real differentiators are the skills that allow engineers to build scalable, maintainable, and collaborative software solutions. Open source provides an ideal platform for mastering these crucial skills. It forces engineers to write clean, maintainable code, work within distributed teams, document effectively, and apply industry best practices that lead to software longevity. Some of the most successful open-source projects have been maintained for decades, demonstrating principles that can be used in any professional setting. The reasons and methods for participating in open-source projects were explored in a previous article: Why and How to Participate in Open Source Projects. This article will focus on the hard skills gained through open-source contributions and how they can accelerate a software engineering career. Now, let's explore six key categories of skills that open source can help develop, enabling career advancement.

1. Software Architecture

Software architecture is the foundation of any successful project. Open source forces engineers to think critically about design choices because the code must be understandable, maintainable, and scalable by contributors across the globe. When contributing to open-source projects—especially those under organizations like the Eclipse Foundation or Apache Foundation—it is necessary to clearly define the scope, structure, and integration points of the software. This mirrors the architecture work done in large companies, helping to build real-world experience that is directly transferable to enterprise systems. Engaging in open source provides the opportunity to design systems that are:

Modular and extensible
Well-documented and maintainable
Scalable and adaptable to change

2. Software Design

Beyond architecture, software design ensures that the code written is both functional and efficient. Open source encourages simplicity and pragmatism—every decision is driven by necessity rather than an arbitrary desire to implement complex patterns. In open source, design decisions are:

Context-driven: Code is written to serve a specific purpose.
Focused on usability: APIs and libraries must be easy to understand and use.
Iterative: Design evolves based on real-world feedback and contributions.

Rather than adding unnecessary layers and abstractions, open-source projects emphasize clarity and efficiency, a mindset that can help prevent over-engineering in enterprise projects.

3. Documentation

A common misconception is that documentation is secondary to writing code. In reality, documentation is a core part of software engineering—and open source demonstrates this principle exceptionally well. Successful open-source projects rely on clear documentation to onboard new contributors. This includes:

README files that explain the purpose and usage of a project
API documentation for developers
Design guidelines and architectural decisions

Improving documentation skills makes work more accessible to others and enables scalability within teams. Companies value engineers who can communicate ideas clearly, making documentation a crucial skill for career advancement.

4. Testing

Open-source projects rely on robust testing strategies to ensure code quality and maintainability.
Unlike private projects, where tests may be overlooked, open-source software must be reliable enough for anyone to use and extend. By contributing to open source, it is possible to learn how to:

Write unit tests, integration tests, and end-to-end tests
Use testing frameworks effectively
Adopt test-driven development (TDD) to improve code quality

Testing ensures predictability and stability, making it easier to evolve software over time without introducing breaking changes.

5. Persistence and Data Management

Data storage and retrieval are fundamental aspects of software engineering. Open-source projects often interact with multiple databases, caching mechanisms, and distributed storage systems. By participating in open source, exposure to various persistence strategies is gained, including:

Relational databases (PostgreSQL, MySQL)
NoSQL databases (MongoDB, Cassandra)
Caching solutions (Redis, Memcached)
Hybrid and NewSQL approaches

Understanding these technologies and their trade-offs helps make informed decisions about handling data efficiently in software projects.

6. Leadership and Communication

Technical skills alone won't make someone a staff engineer or a principal engineer—leadership and communication skills are also essential. Open source provides a unique opportunity to:

Collaborate with developers from different backgrounds
Review and provide constructive feedback on code contributions
Advocate for design decisions and improvements
Lead discussions on project roadmaps and features

If the goal is to influence technical direction, participating in open source teaches how to communicate effectively, defend ideas with evidence, and lead technical initiatives.

Becoming an Ultimate Engineer

The ultimate engineer understands the context of software development, fights for simplicity, and embraces the six principles above to create impactful software. Open source is one of the best ways to develop these skills in a real-world setting. By incorporating open-source techniques into daily work, engineers can:

Build a strong portfolio of contributions
Develop a deeper understanding of software design and architecture
Improve documentation and testing practices
Gain expertise in data persistence
Enhance leadership and communication skills

A book titled The Ultimate Engineer provides further insights into these six categories and explains how to apply open-source techniques to accelerate career growth. More details can be found here: The Ultimate Engineer.

Conclusion

Open source is not just about writing code for free—it's about learning, growing, and making a lasting impact in the industry. Integrating open-source methodologies into daily work improves software engineering skills and positions engineers for career advancement, whether the goal is to become a staff engineer, principal engineer, or even a distinguished fellow. Start today—find an open-source project, contribute, and take your engineering career to the next level!
Stored procedures and functions implement the business logic of the database. When migrating a SQL Server database to PostgreSQL, you will need to convert stored procedures and functions properly, paying attention to parameter handling, rowset retrieval, and other specific syntax constructions. SQL Server uses a dialect of SQL called Transact-SQL (or T-SQL) for stored procedures and functions, while PostgreSQL uses Procedural Language/PostgreSQL (or PL/pgSQL) for the same purpose. These languages have significantly different syntax and capabilities, so stored procedures and functions must be carefully analyzed and converted. Also, some T-SQL features have no direct equivalents in PL/pgSQL, and therefore an alternative implementation is required for those cases. Finally, stored procedures and functions must be optimized for the PostgreSQL engine to ensure they perform efficiently.

Returning a Rowset

Both SQL Server and PostgreSQL allow the return of a rowset, usually the result of a SELECT query, from stored procedures or functions, but the syntax differs. If a T-SQL stored procedure contains SELECT as the last statement of the body, it returns a rowset. PL/pgSQL requires either a forward declaration of the returned rowset as a table or fetching the data through a refcursor. When the returned rowset has just a few columns with clear types, you can use the RETURNS TABLE feature of PostgreSQL. In T-SQL:

SQL CREATE PROCEDURE GetCustomerOrders @CustomerID INT AS SELECT OrderID, OrderDate, Amount FROM Orders WHERE CustomerID = @CustomerID; GO

In PL/pgSQL, the same may look like this:

SQL CREATE OR REPLACE FUNCTION GetCustomerOrders(CustomerID INT) RETURNS TABLE(OrderID INT, OrderDate TIMESTAMP, Amount DECIMAL) AS $$ BEGIN RETURN QUERY SELECT OrderID, OrderDate, Amount FROM Orders WHERE CustomerID = GetCustomerOrders.CustomerID; END; $$ LANGUAGE plpgsql;

And the caller PostgreSQL code may look like this:

SQL SELECT * FROM GetCustomerOrders(5);

If the returned rowset is more complicated and it is hard to determine the data type for each column, the approach above may not work. For those cases, the workaround is to use a refcursor.
In T-SQL: SQL CREATE PROCEDURE GetSalesByRange @DateFrom DATETIME, @DateTo DATETIME AS SELECT C.CustomerID, C.Name AS CustomerName, C.FirstName, C.LastName, C.Email AS CustomerEmail, C.Mobile, C.AddressOne, C.AddressTwo, C.City, C.ZipCode, CY.Name AS Country, ST.TicketID, TT.TicketTypeID, TT.Name AS TicketType, PZ.PriceZoneID, PZ.Name AS PriceZone, ST.FinalPrice AS Price, ST.Created, ST.TransactionType, COALESCE(VME.ExternalEventID, IIF(E.ExternalID = '', NULL, E.ExternalID), '0') AS ExternalID, E.EventID, ES.[Name] AS Section, ST.RowName, ST.SeatName FROM [Event] E WITH (NOLOCK) INNER JOIN EventCache EC WITH (NOLOCK) ON E.EventID = EC.EventID INNER JOIN SaleTicket ST WITH (NOLOCK) ON E.EventID = ST.EventID INNER JOIN EventSection ES WITH (NOLOCK) ON ST.EventSectionID = ES.EventSectionID INNER JOIN Customer C WITH (NOLOCK) ON ST.CustomerID = C.CustomerID INNER JOIN Country CY WITH (NOLOCK) ON C.CountryID = CY.CountryID INNER JOIN TicketType TT WITH (NOLOCK) ON ST.TicketTypeID = TT.TicketTypeID INNER JOIN PriceZone PZ WITH (NOLOCK) ON ST.PriceZoneID = PZ.PriceZoneID LEFT OUTER JOIN VenueManagementEvent VME ON VME.EventID = E.EventID WHERE ST.Created BETWEEN @DateFrom AND @DateTo ORDER BY ST.Created GO In PL/pgSQL: SQL CREATE OR REPLACE FUNCTION GetSalesByRange ( V_DateFrom TIMESTAMP(3), V_DateTo TIMESTAMP(3), V_rc refcursor ) RETURNS refcursor AS $$ BEGIN OPEN V_rc FOR SELECT C.CustomerID, C.Name AS CustomerName, C.FirstName, C.LastName, C.Email AS CustomerEmail, C.Mobile, C.AddressOne, C.AddressTwo, C.City, C.ZipCode, CY.Name AS Country, ST.TicketID, TT.TicketTypeID, TT.Name AS TicketType, PZ.PriceZoneID, PZ.Name AS PriceZone, ST.FinalPrice AS Price, ST.Created, ST.TransactionType, COALESCE( VME.ExternalEventID, (CASE WHEN E.ExternalID = '' THEN NULL ELSE E.ExternalID END), '0') AS ExternalID, E.EventID, ES.Name AS Section, ST.RowName, ST.SeatName FROM Event E INNER JOIN EventCache EC ON E.EventID = EC.EventID INNER JOIN SaleTicket ST ON E.EventID = ST.EventID INNER JOIN EventSection ES ON ST.EventSectionID = ES.EventSectionID INNER JOIN Customer C ON ST.CustomerID = C.CustomerID INNER JOIN Country CY ON C.CountryID = CY.CountryID INNER JOIN TicketType TT ON ST.TicketTypeID = TT.TicketTypeID INNER JOIN PriceZone PZ ON ST.PriceZoneID = PZ.PriceZoneID LEFT OUTER JOIN VenueManagementEvent VME ON VME.EventID = E.EventID WHERE ST.Created BETWEEN V_DateFrom AND V_DateTo ORDER BY ST.Created; RETURN V_rc; END; $$ LANGUAGE plpgsql; And the caller PostgreSQL code may look like this: SQL BEGIN; SELECT GetSalesByRange( '2024-01-01'::TIMESTAMP(3), '2025-01-01'::TIMESTAMP(3), 'mycursorname' ); FETCH 4 FROM mycursorname; COMMIT; Declaration of Local Variables T-SQL allows local variables to be declared everywhere inside a stored procedure or function body. PL/pgSQL requires that all local variables are declared before BEGIN keyword: SQL CREATE OR REPLACE FUNCTION CreateEvent(…) AS $$ DECLARE v_EventID INT; v_EventGroupID INT; BEGIN … END; $$ LANGUAGE plpgsql; In SQL Server, table variables can be declared as follows: SQL DECLARE @Products TABLE ( ProductID int, ProductTitle varchar(100), ProductPrice decimal (8,2) ) PostgreSQL does not support this feature; temporary tables should be used instead: SQL CREATE TEMP TABLE Products ( ProductID int, ProductTitle varchar(100), ProductPrice decimal (8,2) ) Remember that temporary tables are automatically dropped at the end of the session or the current transaction. 
If you need to manage the lifetime of the table explicitly, use the DROP TABLE IF EXISTS statement. Pay attention to the appropriate SQL Server to PostgreSQL type mapping when converting variable declarations.

Last Value of Auto-Increment Column

After running an INSERT query, you may need to get the generated value of the auto-increment column. In T-SQL, it may be obtained as:

SQL CREATE TABLE aitest (id int identity, val varchar(20)); INSERT INTO aitest(val) VALUES ('one'),('two'),('three'); SELECT @LastID = SCOPE_IDENTITY();

PostgreSQL allows access to the last inserted value via an automatically generated sequence that always has the name {tablename}_{columnname}_seq:

SQL CREATE TABLE aitest (id serial, val varchar(20)); INSERT INTO aitest(val) VALUES ('one'),('two'),('three'); LastID := currval('aitest_id_seq');

Built-In Functions

When migrating stored procedures and functions from SQL Server to PostgreSQL, all specific built-in functions and operators must be converted into equivalents according to the rules below:

Function CHARINDEX must be replaced by the PostgreSQL equivalent POSITION
Function CONVERT must be migrated into PostgreSQL according to the rules specified in this article
Function DATEADD($interval, $n_units, $date) can be converted into PostgreSQL expressions that use the operator + depending on the $interval value as follows:

DAY / DD / D / DAYOFYEAR / DY: ($date + $n_units * interval '1 day')::date
HOUR / HH: ($date + $n_units * interval '1 hour')::date
MINUTE / MI / N: ($date + $n_units * interval '1 minute')::date
MONTH / MM / M: ($date + $n_units * interval '1 month')::date
QUARTER / QQ / Q: ($date + $n_units * 3 * interval '1 month')::date
SECOND / SS / S: ($date + $n_units * interval '1 second')::date
WEEK / WW / WK: ($date + $n_units * interval '1 week')::date
WEEKDAY / DW / W: ($date + $n_units * interval '1 day')::date
YEAR / YY: ($date + $n_units * interval '1 year')::date

Function DATEDIFF($interval, $date1, $date2) of SQL Server can be emulated in PostgreSQL via DATE_PART as follows:

DAY / DD / D / DAYOFYEAR / DY: date_part('day', $date2 - $date1)::int
HOUR / HH: 24 * date_part('day', $date2 - $date1)::int + date_part('hour', $date2 - $date1)
MINUTE / MI / N: 1440 * date_part('day', $date2 - $date1)::int + 60 * date_part('hour', $date2 - $date1) + date_part('minute', $date2 - $date1)
MONTH / MM / M: (12 * (date_part('year', $date2) - date_part('year', $date1))::int + date_part('month', $date2) - date_part('month', $date1))::int
SECOND / SS / S: 86400 * date_part('day', $date2 - $date1)::int + 3600 * date_part('hour', $date2 - $date1) + 60 * date_part('minute', $date2 - $date1) + date_part('second', $date2 - $date1)
WEEK / WW / WK: TRUNC(date_part('day', $date2 - $date1) / 7)
WEEKDAY / DW / W: date_part('day', $date2 - $date1)::int
YEAR / YY: (date_part('year', $date2) - date_part('year', $date1))::int

Every occurrence of DATEPART must be replaced by DATE_PART
SQL Server function GETDATE must be converted into PostgreSQL NOW()
Conditional operator IIF($condition, $first, $second) must be converted into CASE WHEN $condition THEN $first ELSE $second END
Every occurrence of ISNULL must be replaced by COALESCE
SQL Server function REPLICATE must be converted into the PostgreSQL equivalent, REPEAT
Every occurrence of SPACE($n) must be replaced by REPEAT(' ', $n)

Conclusion

The migration of stored procedures and functions between two DBMSs is quite a complicated procedure requiring much time and effort. Although it cannot be completely automated, some available tools online could help partially automate the procedure.
Modern software applications often need to support multiple frontend UIs like web, Android, iOS, TV, and VR, each with unique requirements. Traditionally, developers have depended on a single backend to serve all clients. However, the complexity of serving different frontends' needs with a monolithic backend can result in performance bottlenecks, complicated APIs, and unnecessary data interactions. The Backend for Frontend (BFF) architecture helps address these challenges by creating a dedicated backend service for each frontend type. Each BFF is dedicated to a specific kind of UI, improving performance, UX, and overall system stability and maintainability.

A General-Purpose API Backend (Traditional)

If different UIs make the same requests, a general-purpose API can work well. However, the mobile or TV experience often differs significantly from a desktop web experience. First, mobile devices have distinct constraints; less screen space limits how much data you can show, and multiple server connections can drain the device's battery and increase data usage (on LTE). Next, mobile API calls differ from desktop API calls. For example, in a traditional Netflix scenario, a desktop app might let users browse movies and shows, buy movies online, and show a lot of information about the movies and shows. On mobile, the features are far more limited. As we've developed more mobile applications, it has become clear that people interact with devices differently, requiring us to expose different capabilities or features. In general, mobile devices make fewer requests and display less data compared to desktop apps. As a result, the API backend accumulates additional features to support mobile interfaces. A general-purpose API backend often ends up taking on many responsibilities, which in turn calls for a dedicated team to manage the code base and fix bugs. This can lead to a larger budget, a more complex team structure, and frontend teams having to coordinate with this separate team to implement changes. This API team has to prioritize requests from various client teams while also working on integration with downstream APIs.

Introducing the Backend For Frontend (BFF)

One solution to the problems of a general-purpose API is to use a dedicated backend for each UI or application type, also known as a Backend For Frontend (BFF). Conceptually, the user-facing application has two parts: the client-side application and the server-side component. The BFF is closely aligned with a specific user experience and is typically managed by the same team responsible for the user interface. This makes it easier to tailor and adjust the API to meet the needs of the UI, while also streamlining the release process for both the client and server components. A BFF is focused on a single user interface only, allowing it to be smaller and more targeted in its functionality.

How Many BFFs Should We Create?

When delivering similar user experiences across different platforms like mobile, TV, desktop, web, AR, and VR, having a separate BFF for each class of client is preferred. For example, both the Android and iOS versions of an app share the same BFF. All TV clients (for example, Android TV, Apple TV, and Roku TV) use the same BFF, which is customized for TV apps. When apps share a BFF, they do so only within the same class of user interface. For example, Netflix's iOS and Android apps share the same BFF, but their TV apps use a different BFF.

How Do We Handle Multiple Downstream Services Efficiently?
BFFs are a useful architectural pattern when dealing with a few backend services. However, in organizations with many services, they become essential, as the need to aggregate multiple downstream calls to provide user functionality grows significantly. Take, for instance, Netflix, where you want to display a user's recommendations along with ratings, comments, available languages, closed captions, trailers, etc. In this scenario, multiple services are responsible for different parts of the information: the recommendation service holds the list of movies and their IDs, the movie catalog service manages item names and ratings, while the comments service tracks comments. The BFF would expose a method to retrieve the complete recommendations view, which would require at least three downstream service calls. From an efficiency perspective, it's best to run as many of these calls in parallel as possible. After the initial call to the recommendations service, the subsequent calls to the rating and comments services should ideally occur simultaneously to minimize overall response time. Managing parallel and sequential calls, however, can quickly become complicated in more advanced use cases. This is where asynchronous programming models are valuable, as they simplify handling multiple asynchronous calls (see the sketch after the conclusion below). Understanding failure modes is also crucial. For instance, while it might seem logical to wait for all downstream calls to succeed before responding to the client, this isn't always the best approach. If the recommendations service is unavailable, the request can't proceed, but if only the rating service fails, it may be better to degrade the response by omitting the rating information instead of failing the entire request. The BFF should handle these scenarios, and the client must be capable of interpreting partial responses and rendering them correctly.

Conclusion

The BFF pattern is a powerful tool for organizations seeking to deliver optimized, scalable, and efficient frontends for a variety of client types. It allows for better separation of concerns, minimizes complexity in frontend development, and improves overall system performance. While the approach does come with challenges, such as maintaining multiple backends and avoiding code duplication, the benefits often outweigh the downsides for teams working in complex, multi-client environments.
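To make the parallel-aggregation idea above concrete, here is a minimal sketch in Java using CompletableFuture. The client interfaces and the RecommendationView record are hypothetical stand-ins rather than any real Netflix or BFF API; the point is the orchestration: one sequential call for the movie IDs, then a parallel fan-out for ratings and comments, with per-call fallbacks so a failing rating lookup degrades the view instead of failing the whole request.

Java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical downstream clients; names are illustrative only.
interface RecommendationClient { List<String> movieIdsFor(String userId); }
interface CatalogClient { String ratingFor(String movieId); }
interface CommentsClient { List<String> commentsFor(String movieId); }

public class RecommendationsBff {
    private final RecommendationClient recommendations;
    private final CatalogClient catalog;
    private final CommentsClient comments;

    public RecommendationsBff(RecommendationClient r, CatalogClient c, CommentsClient m) {
        this.recommendations = r;
        this.catalog = c;
        this.comments = m;
    }

    public List<RecommendationView> recommendationsFor(String userId) {
        // Sequential step: the movie IDs are needed before anything else can run.
        List<String> movieIds = recommendations.movieIdsFor(userId);

        // Parallel step: ratings and comments for each movie are independent, so fan out.
        List<CompletableFuture<RecommendationView>> futures = movieIds.stream()
            .map(id -> {
                CompletableFuture<String> rating = CompletableFuture
                    .supplyAsync(() -> catalog.ratingFor(id))
                    // Graceful degradation: a missing rating should not fail the view.
                    .exceptionally(ex -> null);
                CompletableFuture<List<String>> movieComments = CompletableFuture
                    .supplyAsync(() -> comments.commentsFor(id))
                    .exceptionally(ex -> List.of());
                return rating.thenCombine(movieComments,
                    (r, c) -> new RecommendationView(id, r, c));
            })
            .toList();

        // Wait for all parallel calls and assemble the aggregated response.
        return futures.stream().map(CompletableFuture::join).toList();
    }

    public record RecommendationView(String movieId, String rating, List<String> comments) {}
}

In a real BFF you would likely plug in timeouts and per-service thread pools, but the shape stays the same: sequential where there is a data dependency, parallel everywhere else, and partial results instead of hard failures.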
DZone events bring together industry leaders, innovators, and peers to explore the latest trends, share insights, and tackle industry challenges. From Virtual Roundtables to Fireside Chats, our events cover a wide range of topics, each tailored to provide you, our DZone audience, with practical knowledge, meaningful discussions, and support for your professional growth.

DZone Events Happening Soon

Below, you'll find upcoming events that you won't want to miss.

Modernizing Enterprise Java Applications: Jakarta EE, Spring Boot, and AI Integration
Date: February 25, 2025
Time: 1:00 PM ET
Register for Free!
Unlock the potential of AI integration in your enterprise Java applications with our upcoming webinar! Join Payara and DZone to explore how to enhance your Spring Boot and Jakarta EE systems using generative AI tools like Spring AI and REST client patterns.

What to Consider When Building an IDP
Date: March 4, 2025
Time: 1:00 PM ET
Register for Free!
Is your development team bogged down by manual tasks and "TicketOps"? Internal Developer Portals (IDPs) streamline onboarding, automate workflows, and enhance productivity—but should you build or buy? Join Harness and DZone for a webinar to explore key IDP capabilities, compare Backstage vs. managed solutions, and learn how to drive adoption while balancing cost and flexibility.

DevOps for Oracle Applications with FlexDeploy: Automation and Compliance Made Easy
Date: March 11, 2025
Time: 1:00 PM ET
Register for Free!
Join Flexagon and DZone as Flexagon's CEO unveils how FlexDeploy is helping organizations future-proof their DevOps strategy for Oracle Applications and Infrastructure. Explore innovations for automation through compliance, along with real-world success stories from companies who have adopted FlexDeploy.

Make AI Your App Development Advantage: Learn Why and How
Date: March 12, 2025
Time: 10:00 AM ET
Register for Free!
The future of app development is here, and AI is leading the charge. Join OutSystems and DZone, on March 12th at 10am ET, for an exclusive webinar with Luis Blando, CPTO of OutSystems, and John Rymer, industry analyst at Analysis.Tech, as they discuss how AI and low-code are revolutionizing development. You will also hear from David Gilkey, Leader of Solution Architecture, Americas East at OutSystems, and Roy van de Kerkhof, Director at NovioQ. This session will give you the tools and knowledge you need to accelerate your development and stay ahead of the curve in the ever-evolving tech landscape.

Developer Experience: The Coalescence of Developer Productivity, Process Satisfaction, and Platform Engineering
Date: March 12, 2025
Time: 1:00 PM ET
Register for Free!
Explore the future of developer experience at DZone's Virtual Roundtable, where a panel will dive into key insights from the 2025 Developer Experience Trend Report. Discover how AI, automation, and developer-centric strategies are shaping workflows, productivity, and satisfaction. Don't miss this opportunity to connect with industry experts and peers shaping the next chapter of software development.

Unpacking the 2025 Developer Experience Trends Report: Insights, Gaps, and Putting it into Action
Date: March 19, 2025
Time: 1:00 PM ET
Register for Free!
We've just seen the 2025 Developer Experience Trends Report from DZone, and while it shines a light on important themes like platform engineering, developer advocacy, and productivity metrics, there are some key gaps that deserve attention.
Join Cortex Co-founders Anish Dhar and Ganesh Datta for a special webinar, hosted in partnership with DZone, where they’ll dive into what the report gets right—and challenge the assumptions shaping the DevEx conversation. Their take? Developer experience is grounded in clear ownership. Without ownership clarity, teams face accountability challenges, cognitive overload, and inconsistent standards, ultimately hampering productivity. Don’t miss this deep dive into the trends shaping your team’s future. What's Next? DZone has more in store! Stay tuned for announcements about upcoming Webinars, Virtual Roundtables, Fireside Chats, and other developer-focused events. Whether you’re looking to sharpen your skills, explore new tools, or connect with industry leaders, there’s always something exciting on the horizon. Don’t miss out — save this article and check back often for updates!
Microservices and containers are revolutionizing how modern applications are built, deployed, and managed in the cloud. However, developing and operating microservices can introduce significant complexity, often requiring developers to spend valuable time on cross-cutting concerns like service discovery, state management, and observability. Dapr, or Distributed Application Runtime, is an open-source runtime for building microservices on cloud and edge environments. It provides platform-agnostic building blocks like service discovery, state management, pub/sub messaging, and observability out of the box. Dapr moved to the graduated maturity level of the CNCF (Cloud Native Computing Foundation) and is currently used by many enterprises. When combined with Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service from AWS, Dapr can accelerate the adoption of microservices and containers, enabling developers to focus on writing business logic without worrying about infrastructure plumbing. Amazon EKS makes managing Kubernetes clusters easy, enabling effortless scaling as workloads change. In this blog post, we'll explore how Dapr simplifies microservices development on Amazon EKS. We'll start by diving into two essential building blocks: service invocation and state management.

Service Invocation

Seamless and reliable communication between microservices is crucial. However, developers often struggle with complex tasks like service discovery, standardizing APIs, securing communication channels, handling failures gracefully, and implementing observability. With Dapr's service invocation, these problems become a thing of the past. Your services can effortlessly communicate with each other using industry-standard protocols like gRPC and HTTP/HTTPS. Service invocation handles all the heavy lifting, from service registration and discovery to request retries, encryption, access control, and distributed tracing.

State Management

Dapr's state management building block simplifies the way developers work with state in their applications. It provides a consistent API for storing and retrieving state data, regardless of the underlying state store (e.g., Redis, AWS DynamoDB, Azure Cosmos DB). This abstraction enables developers to build stateful applications without worrying about the complexities of managing and scaling state stores.

Prerequisites

In order to follow along with this post, you should have the following:

An AWS account. If you don't have one, you can sign up for one.
An IAM user with proper permissions. The IAM security principal that you're using must have permission to work with Amazon EKS IAM roles, service-linked roles, AWS CloudFormation, a VPC, and related resources. For more information, see Actions, resources, and condition keys for Amazon Elastic Container Service for Kubernetes and Using service-linked roles in the AWS Identity and Access Management User Guide.

Application Architecture

In the diagram below, we have two microservices: a Python app and a Node.js app. The Python app generates order data and invokes the /neworder endpoint exposed by the Node.js app. The Node.js app writes the incoming order data to a state store (in this case, Amazon ElastiCache) and returns an order ID to the Python app as a response. By leveraging Dapr's service invocation building block, the Python app can seamlessly communicate with the Node.js app without worrying about service discovery, API standardization, communication channel security, failure handling, or observability.
It implements mTLS to provide secure service-to-service communication. Dapr handles these cross-cutting concerns, allowing developers to focus on writing the core business logic. Additionally, Dapr's state management building block simplifies how the Node.js app interacts with the state store (Amazon ElastiCache). Dapr provides a consistent API for storing and retrieving state data, abstracting away the complexities of managing and scaling the underlying state store. This abstraction enables developers to build stateful applications without worrying about the intricacies of state store management. The Amazon EKS cluster hosts a namespace called dapr-system, which contains the Dapr control plane components. The dapr-sidecar-injector automatically injects a Dapr runtime into the pods of Dapr-enabled microservices.

Service Invocation Steps

The order generator service (Python app) invokes the Node app's method, /neworder. This request is sent to the local Dapr sidecar, which is running in the same pod as the Python app.
Dapr resolves the target app using the Amazon EKS cluster's DNS provider and sends the request to the Node app's sidecar.
The Node app's sidecar then sends the request to the Node app microservice.
The Node app then writes the order ID received from the Python app to Amazon ElastiCache.
The Node app sends the response to its local Dapr sidecar.
The Node app's sidecar forwards the response to the Python app's Dapr sidecar.
The Python app's sidecar returns the response to the Python app, which had initiated the request to the Node app's method /neworder.

Deployment Steps

Create and Confirm an EKS Cluster

To set up an Amazon EKS (Elastic Kubernetes Service) cluster, you'll need to follow several steps. Here's a high-level overview of the process:

Prerequisites

Install and configure the AWS CLI
Install eksctl, kubectl, and the AWS IAM Authenticator

1. Create an EKS cluster. Use eksctl to create a basic cluster with a command like:

Shell eksctl create cluster --name my-cluster --region us-west-2 --node-type t3.medium --nodes 3

2. Configure kubectl. Update your kubeconfig to connect to the new cluster:

Shell aws eks update-kubeconfig --name my-cluster --region us-west-2

3. Verify the cluster. Check if your nodes are ready:

Shell kubectl get nodes

Install Dapr on Your EKS Cluster

1. Install the Dapr CLI:

Shell wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

2. Verify the installation:

Shell dapr -h

3. Install Dapr and validate:

Shell dapr init -k --dev
dapr status -k

The Dapr components statestore and pubsub are created in the default namespace. You can check them by using the command below:

Shell dapr components -k

Configure Amazon ElastiCache as Your Dapr StateStore

Create an Amazon ElastiCache cache to store the state for the microservice. In this example, we are using ElastiCache serverless, which quickly creates a cache that automatically scales to meet application traffic demands with no servers to manage. Configure the security group of the ElastiCache cache to allow connections from your EKS cluster. For the sake of simplicity, keep it in the same VPC as your EKS cluster. Take note of the cache endpoint, which we will need for the subsequent steps.

Running a Sample Application

1. Clone the Git repo of the sample application:

Shell git clone https://github.com/dapr/quickstarts.git

2.
Create redis-state.yaml and provide the Amazon ElastiCache endpoint for redisHost:

YAML apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: statestore namespace: default spec: type: state.redis version: v1 metadata: - name: redisHost value: redisdaprd-7rr0vd.serverless.use1.cache.amazonaws.com:6379 - name: enableTLS value: true

Apply the YAML configuration for the state store component using kubectl:

Shell kubectl apply -f redis-state.yaml

3. Deploy microservices with the sidecar. For the Node app microservice, navigate to the /quickstarts/tutorials/hello-kubernetes/deploy/node.yaml file and you will notice the annotations below. They tell the Dapr control plane to inject a sidecar and also assign a name to the Dapr application.

YAML annotations: dapr.io/enabled: "true" dapr.io/app-id: "nodeapp" dapr.io/app-port: "3000"

Add the annotation service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" in node.yaml to create an AWS ELB.

YAML kind: Service apiVersion: v1 metadata: name: nodeapp annotations: service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" labels: app: node spec: selector: app: node ports: - protocol: TCP port: 80 targetPort: 3000 type: LoadBalancer

Deploy the Node app using kubectl. Navigate to the directory /quickstarts/tutorials/hello-kubernetes/deploy and execute the command below:

Shell kubectl apply -f node.yaml

Obtain the AWS load balancer address, which appears under External IP, in the output of the command below:

Shell kubectl get svc nodeapp

http://k8s-default-nodeapp-3a173e0d55-f7b14bedf0c4dd8.elb.us-east-1.amazonaws.com

Navigate to the /quickstarts/tutorials/hello-kubernetes directory, which has the sample.json file, to execute the step below:

Shell curl --request POST --data "@sample.json" --header Content-Type:application/json http://k8s-default-nodeapp-3a173e0d55-f14bedff0c4dd8.elb.us-east-1.amazonaws.com/neworder

You can verify the output by accessing the /order endpoint using the load balancer in a browser:

Plain Text http://k8s-default-nodeapp-3a173e0d55-f7b14bedff0c4dd8.elb.us-east-1.amazonaws.com/order

You will see the output as {"OrderId":"42"}

Next, deploy the second microservice, the Python app, which has business logic to generate a new order ID every second and invoke the Node app's method /neworder. Navigate to the directory /quickstarts/tutorials/hello-kubernetes/deploy and execute the command below:

Shell kubectl apply -f python.yaml

4. Validate and test your application deployment. Now that we have both microservices deployed, the Python app is generating orders and invoking /neworder, as evident from the logs below:

Shell kubectl logs --selector=app=python -c daprd --tail=-1

Plain Text time="2024-03-07T12:43:11.556356346Z" level=info msg="HTTP API Called" app_id=pythonapp instance=pythonapp-974db9877-dljtw method="POST /neworder" scope=dapr.runtime.http-info type=log useragent=python-requests/2.31.0 ver=1.12.5 time="2024-03-07T12:43:12.563193147Z" level=info msg="HTTP API Called" app_id=pythonapp instance=pythonapp-974db9877-dljtw method="POST /neworder" scope=dapr.runtime.http-info type=log useragent=python-requests/2.31.0 ver=1.12.5

We can see that the Node app is receiving the requests and writing to the state store, Amazon ElastiCache in our example:

Shell kubectl logs --selector=app=node -c node --tail=-1

Plain Text Got a new order! Order ID: 367 Successfully persisted state for Order ID: 367 Got a new order! Order ID: 368 Successfully persisted state for Order ID: 368 Got a new order!
Order ID: 369 Successfully persisted state for Order ID: 369

To confirm that the data is persisted in Amazon ElastiCache, we access the /order endpoint below. It returns the latest order ID, which was generated by the Python app.

Plain Text http://k8s-default-nodeapp-3a173e0d55-f7b14beff0c4dd8.elb.us-east-1.amazonaws.com/order

You will see an output with the most recent order as {"OrderId":"370"}.

Clean Up

Run the commands below to delete the Node app and Python app deployments along with the state store component. Navigate to the /quickstarts/tutorials/hello-kubernetes/deploy directory to execute them:

Shell kubectl delete -f node.yaml
kubectl delete -f python.yaml

You can tear down your EKS cluster using the eksctl command and delete Amazon ElastiCache. Navigate to the directory that has the cluster.yaml file used to create the cluster in the first step:

Shell eksctl delete cluster -f cluster.yaml

Conclusion

Dapr and Amazon EKS form a powerful alliance for microservices development. Dapr simplifies cross-cutting concerns, while EKS manages Kubernetes infrastructure, allowing developers to focus on core business logic and boost productivity. This combination accelerates the creation of scalable, resilient, and observable applications, significantly reducing operational overhead. It's an ideal foundation for your microservices journey. Watch for upcoming posts exploring Dapr and EKS's capabilities in distributed tracing and observability, offering deeper insights and best practices.
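As a complement to the quickstart walkthrough above, here is a minimal sketch of how application code could talk to Dapr's state management building block directly from Java instead of Node.js. It assumes the Dapr Java SDK dependency (io.dapr:dapr-sdk) and the statestore component defined in redis-state.yaml; the key and payload are illustrative, and this is not code from the tutorial.

Java
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;

public class OrderStateExample {
    public static void main(String[] args) throws Exception {
        // The sidecar injected by dapr-sidecar-injector exposes the state API locally;
        // "statestore" must match the component name from redis-state.yaml.
        try (DaprClient client = new DaprClientBuilder().build()) {
            // Persist a value under a key, then read it back through the same abstraction,
            // regardless of whether the store is ElastiCache, DynamoDB, or something else.
            client.saveState("statestore", "order_42", "{\"orderId\":42}").block();
            String value = client.getState("statestore", "order_42", String.class)
                                 .block()
                                 .getValue();
            System.out.println("Persisted state: " + value);
        }
    }
}

The Node.js app in the sample achieves the same thing over the sidecar's HTTP state API; the SDK call above is just a typed wrapper around that endpoint.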
Heroku now officially supports .NET! .NET developers now have access to the officially supported buildpack for .NET, which means you can now deploy your .NET apps onto Heroku with just one command: git push heroku main. Gone are the days of searching for Dockerfiles or community buildpacks. With official support, .NET developers can now run any .NET application (version 8.0 and higher) on the Heroku platform. Being on the platform means you also get:

Simple, low-friction deployment
Scaling and service management
Access to the add-on ecosystem
Security and governance features for enterprise use

Intrigued? Let's talk about what this means for .NET developers.

Why This Matters for .NET Developers

In my experience, running an app on Heroku is pretty easy. But deploying .NET apps was an exception. You could deploy on Heroku, but there wasn't official support. One option was to wrap your app in a Docker container. This meant creating a Dockerfile and dealing with all the maintenance that comes along with that approach. Alternatively, you could find a third-party buildpack, but that introduced another dependency into your deployment process, and you'd lose time trying to figure out which community buildpack was the right one for you. Needing to use these workarounds was unfortunate, as Heroku's seamless deployment is supposed to make it easy to create and prototype new apps. Now, with official buildpack support, the deployment experience for .NET developers is smoother and more reliable.

Key Benefits of .NET on Heroku

The benefits of the new update center around simplicity and scalability. It all begins with simple deployment. Just one git command… and your deployment begins. No need to start another workflow or log into another site every time; just push your code from the command line, and Heroku takes care of the rest. Heroku's official .NET support currently includes C#, Visual Basic, and F# projects for .NET and ASP.NET Core frameworks (version 8.0 and higher). This means that a wide variety of .NET projects are now officially supported. Want to deploy a Blazor app alongside your ASP.NET REST API? You can do that now. Coming into the platform also means you can scale as your app grows. If you need to add another service using a different language, you can deploy that service just as easily as your original app. Or you can easily scale your dynos to match peak load requirements. This scaling extends to Heroku's ecosystem of add-ons, making it easy for you to add value to your application with supporting services while keeping you and your team focused on your core application logic. In addition to simple application deployment, the platform also supports more advanced CI/CD and DevOps needs. With Heroku Pipelines, you have multiple deployment environment support options and can set up review apps so code reviewers can access a live version of your app for each pull request. And all of this integrates tightly with GitHub, giving you automatic deployment triggers to streamline your dev flow.

Getting Started

Let's do a quick walk-through on how to get started. In addition to your application and Git, you will also need the Heroku CLI installed on your local machine. Initialize the CLI with the heroku login command. This will take you to a browser to log into your Heroku account. Once you're logged in, navigate to your .NET application folder.
In that folder, run the following commands: Plain Text ~/project$ heroku create ~/project$ heroku buildpacks:add heroku/dotnet Now, you’re ready to push your app! You just need one command to go live: Plain Text ~/project$ git push heroku main That’s it! For simpler .NET applications, this is all you need. Your application is now live at the app URL provided in the response to your heroku create command. To see it again, you can always use heroku info. Or, you can run heroku open to launch your browser at your app URL. If you can’t find the URL, log in to the Heroku Dashboard. Find your app and click on Open app. You’ll be redirected to your app URL. If you have a more complex application or one with multiple parts, you will need to define a Procfile, which will tell Heroku how to start up your application. Don’t be intimidated! Many Procfiles are just a couple of lines. For more in-depth information, check out the Getting Started on Heroku with .NET guide. Now, we’ve got another question to tackle. Who Should Care? The arrival of .NET on Heroku is relevant to anyone who wants to deploy scalable .NET services and applications seamlessly. For solo devs and startups, the platform’s low friction and scaling take away the burden of deployment and hosting. This allows small teams to focus on building out their core application logic. These teams are also not restricted by their app’s architecture, as Heroku supports both large single-service applications as well as distributed microservice apps. Enterprise teams are poised to benefit from this as well. .NET has historically found much of its adoption in the enterprise, and the addition of official support for .NET to Heroku means that these teams can now combine their .NET experience with the ease of deploying to the Heroku platform. Heroku’s low friction enables rapid prototyping of new applications, and Dyno Formations make it easier to manage and scale a microservice architecture. Additionally, you can get governance through Heroku Enterprise, enabling the security and controls that larger enterprises require. Finally, .NET enthusiasts from all backgrounds and skill levels can now benefit from this new platform addition. By going with a modern PaaS, you can play around with apps and projects of all sizes hassle-free. Wrap-Up That’s a brief introduction to official .NET support on Heroku! It’s now easier than ever to deploy .NET applications of all sizes to Heroku. What are you going to build and deploy first? Let me know in the comments!
With Terraform 1.5 and later, you can use the import block to manage the import of resources directly in your configuration. This feature simplifies the process of importing existing infrastructure into Terraform state, eliminating the need for a separate CLI terraform import command. In this article, we explain the import block and how to use it to import different resources.

What Is a Terraform Import Block?

The Terraform import block introduced in Terraform v1.5.0 provides a declarative approach for importing existing infrastructure resources into a Terraform state file. It allows resource imports to become an integral part of Terraform's planning process — similar to other managed resources — rather than being treated as a direct state operation. As a result, the import block improves transparency and aligns resource imports with the core principles of infrastructure as code (IaC), enabling users to manage their infrastructure more effectively and predictably. The syntax for an import block in Terraform is as follows:

Plain Text
import {
  to = <resource_address>
  id = <resource_identifier>
}

to: Specifies the resource address in your configuration where the imported resource will be mapped.
id: Defines the unique identifier of the existing resource in the provider's API.

Ensure that your Terraform provider is correctly configured to access the resource being imported. Note that some resource types may have additional requirements or constraints for importing.

Import Block vs. Terraform Import Command

An import block in Terraform lets you define resources directly in your configuration file, simplifying the management of existing infrastructure. In contrast, when the terraform import command is used without an import block, it links an existing resource to the Terraform state but does not automatically generate the corresponding configuration in your code. You must manually add this configuration afterward. The import command is particularly useful for one-time imports or transitioning infrastructure into Terraform management. Both methods require careful handling to ensure consistency between the Terraform state and the actual infrastructure. Import blocks are generally better suited for ongoing resource management, whereas the standalone command works well for occasional imports.

Example 1: Using Terraform Import Block to Import an S3 Bucket

Let's suppose we have an existing AWS S3 bucket (my-existing-bucket) that you want to manage with Terraform. The resource block specifies the S3 bucket (aws_s3_bucket.example) and the bucket attribute defines the name of the existing bucket:

Plain Text
resource "aws_s3_bucket" "example" {
  bucket = "my-existing-bucket"
}

import {
  to = aws_s3_bucket.example
  id = "my-existing-bucket"
}

The import block links the existing S3 bucket to the Terraform resource.

to: Maps the imported resource to the address of the resource block (aws_s3_bucket.example)
id: Specifies the unique ID of the bucket (my-existing-bucket).

When you run terraform plan, Terraform reads the import block, checks the state of the existing S3 bucket, and shows a preview of the changes it will make to the state file. Then, after we run terraform apply, Terraform updates the state file to include the existing bucket, mapping it to the aws_s3_bucket.example resource. After running terraform apply and successfully importing the resource, it is a best practice to remove the import block.
Keeping it won't cause any harm, but removing it helps maintain a clean configuration and minimizes potential confusion during future state management.

Example 2: Using Terraform Import Block to Import an EC2 Instance

Let's consider another example: We have an existing EC2 instance with the ID i-1234567890abcdef0 and want to bring it under Terraform management. We define the aws_instance resource we want Terraform to manage in the resource block. Make sure the attributes (e.g., ami, instance_type) match the existing instance's configuration:

Plain Text
resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890" # Replace with the actual AMI ID
  instance_type = "t2.micro"
}

import {
  to = aws_instance.example
  id = "i-1234567890abcdef0"
}

In the import block:

to: Maps the resource in your configuration (aws_instance.example) to the existing resource.
id: Specifies the unique ID of the EC2 instance you are importing.

Once you add the resource block and the import statement to your Terraform configuration file, run terraform plan to preview the changes. Next, run terraform apply to import the resource into Terraform's state file. After the import, Terraform will manage the existing EC2 instance, ensuring its configuration remains declarative.

Example 3: Using Terraform Import Block to Import an Azure Resource Group

In the next example, we will be importing an Azure resource group. We have an existing Azure resource group named example-resource-group in the East US region, and we want to manage it with Terraform. First, in the resource block, we define the azurerm_resource_group resource that Terraform will manage:

Plain Text
resource "azurerm_resource_group" "example" {
  name     = "example-resource-group"
  location = "East US"
}

import {
  to = azurerm_resource_group.example
  id = "/subscriptions/<subscription_id>/resourceGroups/example-resource-group"
}

The import block:

to: Maps the resource in your configuration (azurerm_resource_group.example) to the existing Azure resource.
id: Specifies the fully qualified Azure resource ID of the resource group.

Remember to replace <subscription_id> with your actual subscription ID. Add the resource and the import block to your Terraform configuration file. Next, run the terraform plan command to preview the changes and execute terraform apply to apply the changes and import the resource into Terraform's state file.

Can You Use the Terraform Import Block Conditionally?

The Terraform import block is designed to be declarative and requires specific values known at plan time. Therefore, it cannot be used conditionally within your Terraform code. The import block does not support dynamic expressions or variables for determining the import ID based on conditions. Attempts to use constructs like count or variables within the import block will result in errors, as Terraform does not allow such arguments in this context.

Key Points

The introduction of the import block in Terraform 1.5+ simplifies resource management by enabling the direct import and definition of resources within configuration files. It aligns with IaC principles by reducing complexity and making it easier to integrate existing infrastructure into Terraform configurations.
If there's one thing Web3 devs can agree on, it's that Sybils suck. Bots and fake accounts are ruining airdrops, gaming economies, DAOs, and DeFi incentives. Everyone's trying to fight them, but the solutions are either too centralized and non-private (KYC) or too easy to game (staking-based anti-Sybil tricks). That's where Biomapper comes in handy — an on-chain tool that links one EVM account to one human, verifying that users are real, unique humans without KYC or exposing personal data.

How Biomapper Works (Without Screwing Up Privacy)

Alright, so Biomapper is cross-chain — but how does it actually work? More importantly, how does it verify that someone is a real human without exposing their real-world identity? Here's the TL;DR:

The user scans their face using the Biomapper App.
Their biometric data is encrypted inside a Confidential Virtual Machine (CVM), which means no one — not even Humanode — can see or access it.
A Bio-token is generated and linked to their EVM wallet address.
They bridge their Bio-token to the required chain.
When they interact with a dApp, the smart contract checks the Bio-token to confirm they're a unique person.
Done. No identity leaks, no personal data floating around — just proof that they're not a bot.

Why This Is Different from Other Anti-Sybil Methods

No KYC. No passports, IDs, or personal data needed.
No staking requirements. You can't just buy your way past the system.
No centralized verification authority. No one controls the user list.
Privacy-first. Biometrics are never stored or shared with dApps.

How Projects on Avalanche Can Use Biomapper

Once a user is biomapped, any EVM dApp can check their uniqueness with a single smart contract call. That means:

Airdrop contracts. Only real humans can claim.
DAO voting. One person, one vote. No governance takeovers.
Game reward systems. No multi-account farming.
NFT whitelists. Verified users only, without KYC.
And many more use cases.

It's Sybil resistance without the usual headaches. And integrating it into your dApp? That's easy. Let's go step-by-step on how to set it up.

How to Integrate Your dApps With Biomapper Smart Contracts

Getting Started

Before you begin, make sure you're familiar with the core Biomapper concepts, such as:

Generations. Biomapping data inside the CVMs resets periodically.
General Integration Flow. Users biomap once, bridge it to the specific chain, and can be verified across dApps.

What You Need to Do

Write a smart contract that interacts with Bridged Biomapper on the Avalanche C-Chain.
Add a link to the Biomapper UI on your frontend, so users can complete their biomapping.

Installation: Set Up Your Development Environment

Before you begin, install the required Biomapper SDK dependencies.

Install the Biomapper SDK

The Humanode Biomapper SDK provides interfaces and useful utilities for developing smart contracts and dApps that interact with the Humanode Biomapper and Bridged Biomapper smart contracts. It is usable with any tooling the modern EVM smart contract development ecosystem provides, including Truffle, Hardhat, and Forge. It is open source and available on GitHub; you will find the links to the repo, examples, and the generated code documentation below.
Using npm (Hardhat/Node.js projects):

Shell npm install --save @biomapper-sdk/core @biomapper-sdk/libraries @biomapper-sdk/events

Using yarn:

Shell yarn add @biomapper-sdk/core @biomapper-sdk/libraries @biomapper-sdk/events

Using Foundry: If you're using Foundry, add the Biomapper SDK as a dependency:

Shell forge install humanode-network/biomapper-sdk

These packages allow you to interact with Biomapper smart contracts and APIs.

Smart Contract Development

The next step is to integrate Bridged Biomapper into your smart contract.

Step 1: Import Biomapper Interfaces and Libraries

In your Solidity smart contract, import the necessary interfaces and libraries from the Biomapper SDK.

Solidity
// Import the IBridgedBiomapperRead interface
import { IBridgedBiomapperRead } from "@biomapper-sdk/core/IBridgedBiomapperRead.sol";
// Import the IBiomapperLogRead interface
import { IBiomapperLogRead } from "@biomapper-sdk/core/IBiomapperLogRead.sol";
// Import the BiomapperLogLib library
import { BiomapperLogLib } from "@biomapper-sdk/libraries/BiomapperLogLib.sol";

These imports provide your smart contract with the necessary functions to verify user uniqueness and access biomapping logs.

Step 2: Use Bridged Biomapper on Avalanche

Since Humanode has already deployed the Bridged Biomapper contract on the Avalanche C-Chain, your dApp should interact with it instead of deploying a new Biomapper contract. Smart contract example:

Solidity
pragma solidity ^0.8.0;

// Import the Bridged Biomapper Read interface
import "@biomapper-sdk/core/IBridgedBiomapperRead.sol";

contract MyDapp {
    IBridgedBiomapperRead public biomapper;

    constructor(address _biomapperAddress) {
        biomapper = IBridgedBiomapperRead(_biomapperAddress);
    }

    function isUserUnique(address user) public view returns (bool) {
        return biomapper.isBridgedUnique(user);
    }
}

What this does:

Connects your contract to the official Bridged Biomapper contract on Avalanche.
Allows your contract to verify whether a user has been biomapped and is unique.

You can find the contract addresses, APIs, and more information about the particular contracts, functions, and events in the official Biomapper SDK documentation.

Step 3: Using Mock Contracts for Local Development

For local testing, you can use the MockBridgedBiomapper contract. This allows developers to simulate the integration before deploying to testnet or mainnet. Example usage:

Solidity function generationsBridgingTxPointsListItem(uint256 ptr) external view returns (GenerationBridgingTxPoint memory);

Refer to the Biomapper SDK Docs for technical details on using mock contracts.

Step 4: Calling Biomapper Functions in Your Smart Contracts

Checking user uniqueness: Before allowing a user to claim rewards or access features, verify whether they have a valid biomapping.

Solidity function isUnique(IBiomapperLogRead biomapperLog, address who) external view returns (bool);

If the function returns false, prompt the user to complete the verification process.
Implementing Unique User Verification Here’s an example smart contract that ensures each user is unique before accessing in-game rewards (the contract name is illustrative): Solidity contract GameRewards { using BiomapperLogLib for IBiomapperLogRead; IBiomapperLogRead public immutable BIOMAPPER_LOG; IBridgedBiomapperRead public immutable BRIDGED_BIOMAPPER; mapping(address => bool) public hasClaimedReward; event RewardClaimed(address player); constructor(address biomapperLogAddress, address bridgedBiomapperAddress) { BIOMAPPER_LOG = IBiomapperLogRead(biomapperLogAddress); BRIDGED_BIOMAPPER = IBridgedBiomapperRead(bridgedBiomapperAddress); } function claimGameReward() public { require(!hasClaimedReward[msg.sender], "Reward already claimed"); require(BIOMAPPER_LOG.biomappingsHead(msg.sender) != 0, "User is not biomapped"); require(BRIDGED_BIOMAPPER.biomappingsHead(msg.sender) != 0, "User is not biomapped on bridged chain"); hasClaimedReward[msg.sender] = true; emit RewardClaimed(msg.sender); } } Frontend Integration Integrating Biomapper on the frontend is simple — just add a link to the Biomapper App so users can verify themselves. HTML <a href="https://biomapper.humanode.io" target="_blank"> Verify Your Uniqueness with Biomapper </a> What this does: Redirects users to the Biomapper App, where they scan their biometrics. Once verified, their wallet is biomapped and linked to their EVM address. Testing and Rollout Once you have verified that your contract works locally, deploy it to Avalanche C-Chain. Summing Up With Humanode Biomapper live on Avalanche C-Chain, developers now have a privacy-preserving, Sybil-resistant way to verify real users without KYC. Whether for airdrops, DAO governance, gaming, or DeFi, Biomapper ensures fairness by preventing bots and multi-wallet exploits. Once integrated, your dApp is protected against Sybil attacks while maintaining user privacy. To take it further: Get your dApp listed in the Biomapper App by reaching out to Humanode. Deploy on other EVM-compatible chains beyond Avalanche. Explore Biomapper's cross-chain capability. A more human Web3 is now possible. Start integrating today. For more details, visit the Biomapper SDK Docs and Biomapper docs.
There are scenarios where we would not want to use commercial large language models (LLMs) because the queries and data would leave our environment. There are ways to run open-source LLMs locally. This article explores running Ollama locally and interfacing it with a Spring Boot application using the Spring AI package. We will create an API endpoint that generates unit test cases for the Java code passed as part of the prompt, with the help of the Ollama LLM. Running an Open-Source LLM Locally 1. The first step is to install Ollama; we can go to ollama.com, download the version for our OS, and install it. The installation steps are standard, and there is nothing complicated. 2. Pull the llama3.2 model using the following command: Shell ollama pull llama3.2 For this article, we are using the llama3.2 version, but with Ollama, we can run a number of other open-source LLM models; you can find the list over here. 3. After installation, we can verify that Ollama is running by going to this URL: http://localhost:11434/. You will see the status "Ollama is running". 4. We can also run Ollama as a Docker container and install it using the following command: Shell docker run -d -v ollama:/root/.ollama -p 11438:11434 --name ollama ollama/ollama Since my local Ollama installation already uses port 11434, I have mapped the container to host port 11438 (the container itself still listens on 11434). Once the container is installed, you can run the container and verify that Ollama is running on port 11438. We can also verify the running container using Docker Desktop. Spring Boot Application 1. We will now create a Spring Boot application using Spring Initializr and then add the Spring AI dependencies. Please ensure you have the following POM configuration: XML <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>spring-snapshots</id> <name>Spring Snapshots</name> <url>https://repo.spring.io/snapshot</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.ai</groupId> <artifactId>spring-ai-ollama-spring-boot-starter</artifactId> <version>1.0.0-SNAPSHOT</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.ai</groupId> <artifactId>spring-ai-ollama</artifactId> <version>1.0.0-M6</version> </dependency> </dependencies> 2. We will then configure application.properties for the Ollama model as below: Properties files spring.application.name=ollama spring.ai.ollama.base-url=http://localhost:11434 spring.ai.ollama.chat.options.model=llama3.2 3. Once the Spring Boot application is running, we will write code that generates unit tests for the Java code passed as part of the prompt to the application's API.
4. We will first write a service that interacts with the Ollama model; below is the code snippet: Java @Service public class OllamaChatService { @Qualifier("ollamaChatModel") private final OllamaChatModel ollamaChatModel; private static final String INSTRUCTION_FOR_SYSTEM_PROMPT = """ We will be using you as an agent to generate unit tests for the code that is passed to you; the code will primarily be in Java. You will generate the unit test code and return it back. Please follow these strict guidelines: If the code is in Java, generate only the unit tests and return them; otherwise, return the answer 'Language not supported'. If the prompt contains anything other than Java code, provide the answer 'Incorrect input'. """; public OllamaChatService(OllamaChatModel ollamaChatClient) { this.ollamaChatModel = ollamaChatClient; } public String generateUnitTest(String message){ String responseMessage = null; SystemMessage systemMessage = new SystemMessage(INSTRUCTION_FOR_SYSTEM_PROMPT); UserMessage userMessage = new UserMessage(message); List<Message> messageList = new ArrayList<>(); messageList.add(systemMessage); messageList.add(userMessage); Prompt userPrompt = new Prompt(messageList); ChatResponse extChatResponse = ollamaChatModel.call(userPrompt); if (extChatResponse != null && extChatResponse.getResult() != null && extChatResponse.getResult().getOutput() != null){ AssistantMessage assistantMessage = extChatResponse.getResult().getOutput(); responseMessage = assistantMessage.getText(); } return responseMessage; } } 5. Please take a look at INSTRUCTION_FOR_SYSTEM_PROMPT; it defines the chat agent's purpose and responsibility. We are restricting its responsibility to generating unit test code for Java. If anything else is sent, the answer "Incorrect input" will be returned. 6. Then, we will build an API endpoint that interacts with the chat service. Java @RestController @RequestMapping("/api/ai/ollama") public class OllamaChatController { @Autowired OllamaChatService ollamaChatService; @PostMapping("/unit-test") public ChatResponse generateUnitTests(@RequestBody ChatRequest request) { String response = this.ollamaChatService.generateUnitTest(request.getPrompt()); ChatResponse chatResponse = new ChatResponse(); chatResponse.setMessage(response); return chatResponse; } } Running the API Endpoint Generate unit tests: below is the sample output from the API when generating unit tests for a random number method (a sketch of the assumed request/response classes and the method under test follows).
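The article does not show the ChatRequest and ChatResponse classes used by the controller, nor the exact random number method sent in the prompt. The following sketch shows plausible shapes for them, inferred from the controller code and the generated tests; the class names, fields, and exception message are assumptions, not code from the original project. Note that this ChatResponse is a plain DTO, distinct from Spring AI's ChatResponse used inside the service. Java
// Hypothetical request/response DTOs assumed by the controller (each in its own file).
public class ChatRequest {
    private String prompt;

    public String getPrompt() { return prompt; }
    public void setPrompt(String prompt) { this.prompt = prompt; }
}

public class ChatResponse {
    private String message;

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
}

// A plausible shape for the method under test, inferred from the generated tests below.
public class RandomNumberGenerator {
    private static final java.util.Random RANDOM = new java.util.Random();

    public static int getRandomNumber(int min, int max) {
        if (max <= min) {
            throw new IllegalArgumentException("Max must be greater than min");
        }
        // Returns a value in the inclusive range [min, max]
        return min + RANDOM.nextInt(max - min + 1);
    }
}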
Here are the unit tests for the given Java code: Java import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.*; import java.util.Random; public class RandomNumberGeneratorTest { @Test public void testGetRandomNumberMinAndMaxSame() { int min = 10; int max = 10; try { int randomNumber = getRandomNumber(min, max); fail("Expected IllegalArgumentException"); } catch (IllegalArgumentException e) { assertEquals("Max must be greater than min", e.getMessage()); } } @Test public void testGetRandomNumberMinLessThanZero() { int min = -10; int max = 10; try { int randomNumber = getRandomNumber(min, max); fail("Expected IllegalArgumentException"); } catch (IllegalArgumentException e) { assertEquals("Max must be greater than min", e.getMessage()); } } @Test public void testGetRandomNumberMinGreaterThanMax() { int min = 10; int max = 5; try { int randomNumber = getRandomNumber(min, max); fail("Expected IllegalArgumentException"); } catch (IllegalArgumentException e) { assertEquals("Max must be greater than min", e.getMessage()); } } @Test public void testGetRandomNumberMinAndMaxValid() { int min = 10; int max = 20; int randomNumber = getRandomNumber(min, max); assertTrue(randomNumber >= min && randomNumber <= max); } @Test public void testGetRandomNumberMultipleTimes() { Random random = new Random(); int min = 1; int max = 10; int numberOfTests = 10000; for (int i = 0; i < numberOfTests; i++) { int randomNumber1 = getRandomNumber(min, max); int randomNumber2 = getRandomNumber(min, max); assertTrue(randomNumber1 != randomNumber2); } } } The output above is shown as generated by the model; the generated tests should be reviewed before use. Conclusion Integrating Ollama locally ensures data privacy and saves costs compared to closed-source commercial LLM models. Closed-source commercial LLMs are more powerful, but this approach provides an alternative when an open-source model can handle simpler tasks. You can find the source code on GitHub.
The blend of retrieval-augmented generation (RAG) and generative AI models has brought changes to natural language processing by improving the responses to queries. Agentic RAG enhances the conventional reliance on a single monolithic model by introducing modularity and autonomy. By breaking down the problem-solving process into tools integrated within an agent, Agentic RAG provides benefits like accuracy, transparency, scalability, and debugging capabilities. The Vision Behind Agentic RAG for Text-to-SQL Traditional RAG systems often retrieve relevant documents and rely on a single monolithic model to generate responses. Although this is an effective method in some cases, when it comes to structured outputs such as SQL generation, this approach may not be the most effective. This is where we can leverage the power of the Agentic RAG framework, where we: Divide the tasks into smaller, more manageable tools within an agent. Improve accuracy by assigning tasks to specialized tools. Enhance transparency by tracing the reasoning and workflow of each tool. Simplify scaling and debugging through modular design. Let's talk about how this system works and the role each component plays in transforming user questions into accurate SQL queries. Architecture Overview The structure comprises an agent utilizing tools within the text-to-SQL workflow. The process can be summarized as follows: User Query → Query Transformation Tool → Few-Shot Prompting Tool → Hybrid Search Tool → Re-Ranking Tool → Table Retrieval Tool → Prompt Building Tool → LLM Execution Tool → SQL Execution Tool → Final Output 1. User Query Transformation Tool This tool preprocesses the user query so that the LLM can understand it better. It addresses ambiguities, rephrases user questions, expands abbreviations into their full forms, and provides context when necessary. Enhancements Handle temporal references. Map terms like "as of today" or "till now" to explicit dates. Replace ambiguous words. For example, "recent" could be replaced by "last 7 days." Expand shorthand or abbreviations into their full names. Example Input: "Show recent sales MTD." Transformed query: "Retrieve sales data for the last 7 days (Month to Date)." Python from datetime import date, timedelta from langchain.agents import Tool def transform_query(user_query): # Handle open-ended temporal references today = date.today() transformations = { "as of today": f"up to {today}", "till now": f"up to {today}", "recent": "last 7 days", "last week": f"from {today - timedelta(days=7)} to {today}", } for key, value in transformations.items(): user_query = user_query.replace(key, value) # Map common abbreviations abbreviations = { "MTD": "Month to Date", "YTD": "Year to Date", } for abbr, full_form in abbreviations.items(): user_query = user_query.replace(abbr, full_form) return user_query query_transform_tool = Tool( name="Query Transformer", func=transform_query, description="Refines user queries for clarity and specificity, handles abbreviations and open-ended terms." ) 2. Few-Shot Prompting Tool This tool makes an additional LLM call to identify the most similar question type from a predefined set (in other words, template matching). The matched question is used to enrich the prompt with an example SQL query. Example Workflow 1. Input question: "Show me total sales by product for the last 7 days." 2. Predefined templates: "Show sales grouped by region." → Example SQL: SELECT region, SUM(sales) ... "Show total sales by product." → Example SQL: SELECT product_name, SUM(sales) ... 3.
Most similar question: "Show total sales by product." 4. Output example SQL: SELECT product_name, SUM(sales) FROM ... Python from langchain.chat_models import ChatOpenAI llm = ChatOpenAI(model="gpt-4") predefined_examples = { "Show sales grouped by region": "SELECT region, SUM(sales) FROM sales_data GROUP BY region;", "Show total sales by product": "SELECT product_name, SUM(sales) FROM sales_data GROUP BY product_name;", } def find_similar_question(user_query): prompt = "Find the most similar question type for the following user query:\n" prompt += f"User Query: {user_query}\n\nOptions:\n" for example in predefined_examples.keys(): prompt += f"- {example}\n" prompt += "\nRespond with the closest match." response = llm.predict(prompt) most_similar = response.strip() return predefined_examples.get(most_similar, "") few_shot_tool = Tool( name="Few-Shot Prompting", func=find_similar_question, description="Finds the most similar question type using an additional LLM call and retrieves the corresponding example SQL." ) 3. Hybrid Search Tool For robust retrieval, this tool combines semantic search, keyword search based on BM25, and keyword-based mapping. The results from these search methods are put together using reciprocal rank fusion. How does it all come together? Keyword Table Mapping This approach maps tables to the keywords contained in the query. For example: The presence of "sales" results in the sales_data table being shortlisted. The presence of "product" results in the products table being shortlisted. Keyword Overlap Mapping (BM25) This is a search method based on keyword overlap that shortlists tables in line with relevance. For this, we apply the BM25 technique, which ranks documents in order of relevance to a user search. This technique takes term saturation into account as well as TF-IDF (Term Frequency-Inverse Document Frequency). Term Frequency (TF) measures how often a term appears in a given document. Inverse Document Frequency (IDF) down-weights words that appear in many documents, lessening their importance. Normalization takes document length into account to prevent any bias toward longer documents. Given: sales_data: Contains terms like "sales," "date," "product." products: Contains terms like "product," "category." orders: Contains terms like "order," "date," "customer." financials: Contains terms like "revenue," "profit," "expense." User query: "Show total sales by product." Identify terms in the user query: ["sales", "product"]. Score every table in the database based on the frequency and relevance of these terms. Relevance of documents: sales_data: High relevance due to both "sales" and "product." products: High relevance due to "product." orders: Lower relevance, with only partial overlap with the query terms. financials: Not relevant. Output: Ranked list: [products, sales_data, orders, financials] Semantic Search In this search method, as the name suggests, we find semantically similar tables utilizing vector embeddings. We achieve this by calculating a similarity score, such as cosine similarity, between the document (table) vectors and the user query vector. Reciprocal Rank Fusion Combines the BM25 and semantic search results using the reciprocal rank fusion (RRF) strategy, explained in a little more detail below. RRF is a method to combine results from multiple ranking algorithms (e.g., BM25 and semantic search).
It assigns a score to each document based on its rank in the individual methods, giving higher scores to documents ranked higher across multiple methods. RRF formula: RRF(d) = Σ(r ∈ R) 1 / (k + r(d)) Where: d is a document. R is the set of rankers (search methods). k is a constant (typically 60). r(d) is the rank of document d in search method r. Step-by-Step Example Input data. 1. BM25 ranking results: products (Rank 1), sales_data (Rank 2), orders (Rank 3) 2. Semantic search ranking results: sales_data (Rank 1), financials (Rank 2), products (Rank 3) Step-by-Step Fusion For each table, compute the score: 1. sales_data: BM25 Rank = 2, Semantic Rank = 1. RRF Score = 1/(60+2) + 1/(60+1) = 0.03252 2. products: BM25 Rank = 1, Semantic Rank = 3. RRF Score = 1/(60+1) + 1/(60+3) = 0.03226 3. orders: BM25 Rank = 3, Semantic Rank = Not Ranked. RRF Score = 1/(60+3) = 0.01587 4. financials: BM25 Rank = Not Ranked, Semantic Rank = 2. RRF Score = 1/(60+2) = 0.01613 5. Sort by RRF score: sales_data (highest score, due to its top rank in semantic search). products (high score from BM25). financials (limited overlap). orders (lowest relevance overall). Final output: ['sales_data', 'products', 'financials', 'orders'] Tables retrieved using keyword table mapping are always included. Python from rank_bm25 import BM25Okapi def hybrid_search(query): # Keyword-based mapping keyword_to_table = { "sales": "sales_data", "product": "products", } keyword_results = [table for keyword, table in keyword_to_table.items() if keyword in query.lower()] # BM25 search over a tokenized corpus of table descriptions corpus = ["sales_data", "products", "orders", "financials"] tokenized_corpus = [doc.split() for doc in corpus] bm25 = BM25Okapi(tokenized_corpus) bm25_results = bm25.get_top_n(query.lower().split(), corpus, n=4) # Semantic search (vector_store is assumed to be initialized elsewhere) semantic_results = [doc.page_content for doc in vector_store.similarity_search(query, k=4)] # Reciprocal rank fusion across the ranked lists def reciprocal_rank_fusion(ranked_lists, k=60): rank_map = {} for results in ranked_lists: for rank, table in enumerate(results, start=1): rank_map[table] = rank_map.get(table, 0) + 1 / (k + rank) return sorted(rank_map, key=rank_map.get, reverse=True) combined_results = reciprocal_rank_fusion([bm25_results, semantic_results]) # Keep keyword-mapped tables and preserve the fused order return list(dict.fromkeys(keyword_results + combined_results)) hybrid_search_tool = Tool( name="Hybrid Search", func=hybrid_search, description="Combines keyword mapping, BM25, and semantic search with RRF for table retrieval." ) 4. Re-Ranking Tool This tool ensures the most relevant tables are prioritized. Example Input tables: ["sales_data", "products", "financials"] Re-ranking logic: For each table, compute a relevance score by concatenating the query and the table description. Sort by relevance score. Output: ["sales_data", "products"] A little more on the re-ranking logic: The cross-encoder calculates a relevance score by analyzing the concatenated query and table description as a single input pair. This process involves: Pair input. The query and each table description are paired and passed as input to the cross-encoder. Joint encoding. Unlike separate encoders (e.g., bi-encoders), the cross-encoder jointly encodes the pair, allowing it to better capture context and dependencies between the query and the table description. Scoring. The model outputs a relevance score for each pair, indicating how well the table matches the query.
Python from transformers import pipeline reranker = pipeline("text-classification", model="cross-encoder/ms-marco-TinyBERT-L-2") def re_rank_context(query, results): scores = [(doc, reranker(query + " " + doc)[0]['score']) for doc in results] return [doc for doc, score in sorted(scores, key=lambda x: x[1], reverse=True)] re_rank_tool = Tool( name="Re-Ranker", func=re_rank_context, description="Re-ranks the retrieved tables based on relevance to the query." ) 5. Prompt Building Tool This tool constructs a detailed prompt for the language model, incorporating the user’s refined query, the retrieved schema, and examples from the Few-Shot Prompting Tool. For example: Assume you are someone who is proficient in generating SQL queries. Generate an SQL query to: Retrieve total sales grouped by product for the last 7 days. Relevant tables: sales_data: Contains columns [sales, date, product_id]. products: Contains columns [product_id, product_name]. Example SQL: SQL SELECT product_name, SUM(sales) FROM sales_data JOIN products ON sales_data.product_id = products.product_id GROUP BY product_name; Future Scope While this system uses a single agent with multiple tools to keep the design modular and reduce complexity, a multi-agent framework could be explored in the future. We could possibly explore the following: Dedicated agents for context retrieval. Separate agents for semantic and keyword searches. Task-specific agents. Agents specialized in SQL validation or optimization. Collaboration between agents. Using a coordination agent to manage task delegation. This approach could enhance scalability and allow for more sophisticated workflows, especially in enterprise-level deployments. Conclusion Agentic RAG for text-to-SQL applications offers a scalable, modular approach to solving structured query tasks. By incorporating hybrid search, re-ranking, few-shot prompting, and dynamic prompt construction within a single-agent framework, this system ensures accuracy, transparency, and extensibility. This enhanced workflow demonstrates a powerful blueprint for turning natural language questions into actionable SQL queries.