
Avoiding “no runnable methods” Error in JUnit

  • Testing
  • JUnit
  • JUnit 5

Learn how to avoid the "no runnable methods" error when running JUnit tests.       

1. Overview

JUnit is the primary choice for unit testing in Java. During test execution, developers often face a strange error that says there are no runnable methods, even when we've imported the correct classes.

In this tutorial, we'll see some specific cases resulting in this error and how to fix them.

2. Missing @Test Annotation

First, the test engine must recognize the test class to execute the tests. If there are no valid tests to run, we'll get an exception:

java.lang.Exception: No runnable methods

To avoid this, we need to ensure that the test class is always annotated with the @Test annotation from the JUnit library. For JUnit 4.x, we should use:

import org.junit.Test;

On the other hand, if our testing library is JUnit 5.x, we should import the packages from JUnit Jupiter:

import org.junit.jupiter.api.Test;

In addition, we should pay special attention to the TestNG framework's @Test annotation:

import org.testng.annotations.Test;

When we import this class in place of JUnit's @Test annotation, it can cause a "no runnable methods" error.

3. Mixing JUnit 4 and JUnit 5

Some legacy projects may have both JUnit 4 and JUnit 5 libraries on the classpath. Though the compiler won't report any errors when we mix both libraries, we may face the "no runnable methods" error when running the tests. Let's look at some cases.

3.1. Wrong JUnit 4 Imports

This happens mostly due to the auto-import feature of IDEs that imports the first matching class. Let's look at the right JUnit 4 imports:

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.*;

As seen above, the org.junit package contains the core classes of JUnit 4.

3.2. Wrong JUnit 5 Imports

Similarly, we may import the JUnit 4 classes by mistake instead of JUnit 5, and the tests wouldn't run. So, we need to ensure that we import the right classes for JUnit 5:

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

Here, we can see that the core classes of JUnit 5 belong to the org.junit.jupiter.api package.

3.3. @RunWith and @ExtendWith Annotations

For projects that use the Spring Framework, we need to import a special annotation for integration tests. For JUnit 4, we use the @RunWith annotation to load the Spring TestContext Framework. However, for tests written on JUnit 5, we should use the @ExtendWith annotation to get the same behaviour. When we interchange these two annotations with different JUnit versions, the test engine may not find these tests, as the sketch below illustrates.

In addition, to execute JUnit 5 tests for Spring Boot based applications, we can use the @SpringBootTest annotation, which provides additional features on top of the @ExtendWith annotation.
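For illustration, here's a minimal sketch of the matching pairs. SpringRunner and SpringExtension are the standard Spring test classes; the test bodies and class names are placeholders of our own:

// JUnit 4: @RunWith loads the Spring TestContext Framework
@RunWith(SpringRunner.class)
public class LegacySpringUnitTest {

    @org.junit.Test
    public void whenContextLoads_thenNoError() { }
}

// JUnit 5: @ExtendWith is the equivalent extension mechanism
@ExtendWith(SpringExtension.class)
class ModernSpringUnitTest {

    @org.junit.jupiter.api.Test
    void whenContextLoads_thenNoError() { }
}

Pairing SpringRunner with a JUnit 5 @Test annotation, or SpringExtension with a JUnit 4 one, is exactly the kind of mismatch that leaves the engine with no runnable methods.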
4. Test Utility Classes

Utility classes are useful when we want to reuse the same code across different classes. Accordingly, test utility classes or parent classes share some common setup or initialization methods. Due to the class naming, the test engine recognizes these classes as real test classes and tries to find testable methods. Let's have a look at a utility class:

public class NameUtilTest {
    public String formatName(String name) {
        return (name == null) ? name : name.replace("$", "_");
    }
}

In this case, we can observe that the NameUtilTest class matches the naming convention for real test classes. However, there are no methods annotated with @Test, which results in a "no runnable methods" error. To avoid this scenario, we can reconsider the naming of these utility classes. As such, utility classes that end with "*Test" can be renamed to "*TestHelper" or similar:

public class NameUtilTestHelper {
    public String formatName(String name) {
        return (name == null) ? name : name.replace("$", "_");
    }
}

Alternatively, we can specify the abstract modifier for parent classes that end with the "Test" pattern (e.g., BaseTest) to prevent the class from test execution.

5. Explicitly Ignored Tests

Though not a common scenario, sometimes all the test methods, or the entire test class, could have been incorrectly marked skippable. The @Ignore (JUnit 4) and @Disabled (JUnit 5) annotations can be useful to temporarily prevent certain tests from running. This could be a quick fix to get the build back on track when the test fix is complex or we need urgent deployments:

public class JUnit4IgnoreUnitTest {

    @Ignore
    @Test
    public void whenMethodIsIgnored_thenTestsDoNotRun() {
        Assert.assertTrue(true);
    }
}

In the above case, the enclosing JUnit4IgnoreUnitTest class has just one method, and it was marked as @Ignore. When we run tests, either with an IDE or a Maven build, this might result in a "no runnable methods" error as there's no testable method for the test class. To avoid this error, it's better to remove the @Ignore annotation or have at least one valid test method for execution. The JUnit 5 version looks much the same, as shown below.
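For completeness, here's a minimal sketch of the equivalent JUnit 5 class using @Disabled (the class and method names are illustrative):

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

public class JUnit5DisabledUnitTest {

    @Disabled
    @Test
    void whenMethodIsDisabled_thenTestsDoNotRun() {
        Assertions.assertTrue(true);
    }
}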
6. Conclusion

In this article, we've seen a few cases where we'd get a "no runnable methods" error while running tests in JUnit and how to fix each case. Firstly, we saw that missing the right @Test annotation can cause this error. Secondly, we learned that mixing classes from JUnit 4 and JUnit 5 can lead to the same situation. We also observed the best way to name test utility classes. Finally, we discussed explicitly ignored tests and how they can be an issue.

As always, the code presented in this article is available over on GitHub.

Create a ChatGPT-Like Chatbot With Ollama and Spring AI

  • Spring
  • Spring AI

Explore building a simple help desk Agent API using Spring AI and Meta's llama3 via the Ollama library.       

1. Introduction

In this tutorial, we'll build a simple help desk Agent API using Spring AI and Meta's llama3, served locally through Ollama.

2. What Are Spring AI and Ollama?

Spring AI is the most recent module added to the Spring Framework ecosystem. Along with various features, it allows us to interact easily with various Large Language Models (LLMs) using chat prompts.

Ollama is an open-source library that serves some LLMs. One of them is Meta's llama3, which we'll use for this tutorial.

3. Implementing a Help Desk Agent Using Spring AI

Let's illustrate the use of Spring AI and Ollama together with a demo help desk chatbot. The application works similarly to a real help desk agent, helping users troubleshoot internet connection problems. In the following sections, we'll configure the LLM and Spring AI dependencies and create the REST endpoint that chats with the help desk agent.

3.1. Configuring Ollama and Llama3

To start using Spring AI and Ollama, we need to set up the local LLM. For this tutorial, we'll use Meta's llama3. Therefore, let's first install Ollama. On Linux, we can run the command:

curl -fsSL https://ollama.com/install.sh | sh

On Windows or macOS machines, we can download and install the executable from the Ollama website. After installing Ollama, we can run llama3:

ollama run llama3

With that, we have llama3 running locally.

3.2. Creating the Basic Project Structure

Now, we can configure our Spring application to use the Spring AI module. Let's start by adding the Spring milestones repository:

<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

Then, we can add the spring-ai-bom:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-M1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Finally, we can add the spring-ai-ollama-spring-boot-starter dependency:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
    <version>1.0.0-M1</version>
</dependency>

With the dependencies set, we can configure our application.yml to use the necessary configuration:

spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: llama3

With that, Spring AI will talk to the llama3 model served by Ollama at port 11434.

3.3. Creating the Help Desk Controller

In this section, we'll create the web controller to interact with the help desk chatbot. Firstly, let's create the HTTP request model:

public class HelpDeskRequest {
    @JsonProperty("prompt_message")
    String promptMessage;

    @JsonProperty("history_id")
    String historyId;

    // getters, no-arg constructor
}

The promptMessage field represents the user input message for the model. Additionally, historyId uniquely identifies the current conversation. Further, in this tutorial, we'll use that field to make the LLM remember the conversational history.
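Given those JSON property names, a request body for our endpoint might look like this (the values are, of course, only an example):

{
    "prompt_message": "My Wi-Fi keeps dropping",
    "history_id": "1234"
}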
Secondly, let's create the response model:

public class HelpDeskResponse {
    String result;

    // all-arg constructor
}

Finally, we can create the help desk controller class:

@RestController
@RequestMapping("/helpdesk")
public class HelpDeskController {

    private final HelpDeskChatbotAgentService helpDeskChatbotAgentService;

    // all-arg constructor

    @PostMapping("/chat")
    public ResponseEntity<HelpDeskResponse> chat(@RequestBody HelpDeskRequest helpDeskRequest) {
        var chatResponse = helpDeskChatbotAgentService.call(helpDeskRequest.getPromptMessage(), helpDeskRequest.getHistoryId());
        return new ResponseEntity<>(new HelpDeskResponse(chatResponse), HttpStatus.OK);
    }
}

In the HelpDeskController, we define a POST /helpdesk/chat endpoint and return what we get from the injected HelpDeskChatbotAgentService. In the following sections, we'll dive into that service.

3.4. Calling the Ollama Chat API

To start interacting with llama3, let's create the HelpDeskChatbotAgentService class with the initial prompt instructions:

@Service
public class HelpDeskChatbotAgentService {
    private static final String CURRENT_PROMPT_INSTRUCTIONS = """
        Here's the `user_main_prompt`:
        """;
}

Then, let's also add the general instructions message:

private static final String PROMPT_GENERAL_INSTRUCTIONS = """
    Here are the general guidelines to answer the `user_main_prompt`

    You'll act as Help Desk Agent to help the user with internet connection issues.

    Below are `common_solutions` you should follow in the order they appear in the list to help troubleshoot internet connection problems:

    1. Check if your router is turned on.
    2. Check if your computer is connected via cable or Wi-Fi and if the password is correct.
    3. Restart your router and modem.

    You should give only one `common_solution` per prompt up to 3 solutions.

    Do not mention to the user the existence of any part from the guideline above.
    """;

That message tells the chatbot how to answer the user's internet connection issues. Finally, let's add the rest of the service implementation:

private final OllamaChatModel ollamaChatClient;

// all-arg constructor

public String call(String userMessage, String historyId) {
    var generalInstructionsSystemMessage = new SystemMessage(PROMPT_GENERAL_INSTRUCTIONS);
    var currentPromptMessage = new UserMessage(CURRENT_PROMPT_INSTRUCTIONS.concat(userMessage));

    var prompt = new Prompt(List.of(generalInstructionsSystemMessage, currentPromptMessage));

    return ollamaChatClient.call(prompt).getResult().getOutput().getContent();
}

The call() method first creates one SystemMessage and one UserMessage. System messages represent instructions we give internally to the LLM, like general guidelines. In our case, we provided instructions on how to chat with a user having internet connection issues. On the other hand, user messages represent the API's external client input. With both messages, we create a Prompt object, call ollamaChatClient's call(), and get the response from the LLM.

3.5. Keeping the Conversational History

In general, most LLMs are stateless. Thus, they don't store the current state of the conversation. In other words, they don't remember previous messages from the same conversation. Therefore, the help desk agent might provide instructions that didn't work previously and anger the user. To implement LLM memory, we can store each prompt and response using historyId and append the complete conversational history to the current prompt before sending it.
To do that, let's first create a prompt in the service class with system instructions to follow the conversational history properly:

private static final String PROMPT_CONVERSATION_HISTORY_INSTRUCTIONS = """
    The object `conversational_history` below represents the past interaction between the user and you (the LLM).
    Each `history_entry` is represented as a pair of `prompt` and `response`.
    `prompt` is a past user prompt and `response` was your response for that `prompt`.

    Use the information in `conversational_history` if you need to recall things from the conversation, or in other words, if the `user_main_prompt` needs any information from past `prompt` or `response`.
    If you don't need the `conversational_history` information, simply respond to the prompt with your built-in knowledge.

    `conversational_history`:
    """;

Now, let's create a wrapper class to store conversational history entries:

public class HistoryEntry {

    private String prompt;
    private String response;

    // all-arg constructor

    @Override
    public String toString() {
        return String.format("""
            `history_entry`:
                `prompt`: %s
                `response`: %s
            -----------------
            """, prompt, response);
    }
}

The above toString() method is essential to format the prompt correctly. Then, we also need to define an in-memory storage for the history entries in the service class:

private final static Map<String, List<HistoryEntry>> conversationalHistoryStorage = new HashMap<>();

Finally, let's modify the service's call() method to also store the conversational history:

public String call(String userMessage, String historyId) {
    var currentHistory = conversationalHistoryStorage.computeIfAbsent(historyId, k -> new ArrayList<>());

    var historyPrompt = new StringBuilder(PROMPT_CONVERSATION_HISTORY_INSTRUCTIONS);
    currentHistory.forEach(entry -> historyPrompt.append(entry.toString()));

    var contextSystemMessage = new SystemMessage(historyPrompt.toString());
    var generalInstructionsSystemMessage = new SystemMessage(PROMPT_GENERAL_INSTRUCTIONS);
    var currentPromptMessage = new UserMessage(CURRENT_PROMPT_INSTRUCTIONS.concat(userMessage));

    var prompt = new Prompt(List.of(generalInstructionsSystemMessage, contextSystemMessage, currentPromptMessage));
    var response = ollamaChatClient.call(prompt).getResult().getOutput().getContent();

    var contextHistoryEntry = new HistoryEntry(userMessage, response);
    currentHistory.add(contextHistoryEntry);

    return response;
}

Firstly, we get the current context, identified by historyId, or create a new one using computeIfAbsent(). Secondly, we append each HistoryEntry from the storage into a StringBuilder and pass it to a new SystemMessage, which we add to the Prompt object. Finally, the LLM processes a prompt containing all the information about past messages in the conversation. Therefore, the help desk chatbot remembers which solutions the user has already tried.

4. Testing a Conversation

With everything set, let's try interacting with the prompt from the end-user perspective. Let's first start the Spring Boot application on port 8080 to do that.

With the application running, we can send a cURL with a generic message about internet issues and a history_id (note the escaped single quote, so the shell accepts the payload):

curl --location 'http://localhost:8080/helpdesk/chat' \
--header 'Content-Type: application/json' \
--data '{
    "prompt_message": "I can'\''t connect to my internet",
    "history_id": "1234"
}'

For that interaction, we get a response similar to this:

{
    "result": "Let's troubleshoot this issue! Have you checked if your router is turned on?"
}
Let's keep asking for a solution:

{
    "prompt_message": "I'm still having internet connection problems",
    "history_id": "1234"
}

The agent responds with a different solution:

{
    "result": "Let's troubleshoot this further! Have you checked if your computer is connected via cable or Wi-Fi and if the password is correct?"
}

Moreover, the API stores the conversational history. Let's ask the agent again:

{
    "prompt_message": "I tried your alternatives so far, but none of them worked",
    "history_id": "1234"
}

It comes up with a different solution:

{
    "result": "Let's think outside the box! Have you considered resetting your modem to its factory settings or contacting your internet service provider for assistance?"
}

At this point, we've exhausted the alternatives provided in the guidelines prompt, so the LLM won't give helpful responses afterward. For even better responses, we can improve the prompts we tried by providing more alternatives for the chatbot or improving the internal system message using Prompt Engineering techniques.

5. Conclusion

In this article, we implemented an AI help desk agent to help our customers troubleshoot internet connection issues. Additionally, we saw the difference between user and system messages, how to build the prompt with the conversation history, and then call the llama3 LLM.

As always, the code is available over on GitHub.

Calculate the Sum of Diagonal Values in a 2d Java Array

  • Algorithms
  • Java Array
  • Math

Learn how to compute the sums of the main and secondary diagonals in a two-dimensional array in Java.

1. Overview

Working with two-dimensional arrays (2D arrays) is common in Java, especially for tasks that involve matrix operations. One such task is calculating the sum of the diagonal values in a 2D array.

In this tutorial, we'll explore different approaches to summing the values of the main and secondary diagonals in a 2D array.

2. Introduction to the Problem

First, let's quickly understand the problem. A 2D array forms a matrix. As we need to sum the elements on the diagonals, we assume the matrix is n x n, for example, a 4 x 4 2D array:

static final int[][] MATRIX = new int[][] {
    { 1, 2, 3, 4 },
    { 5, 6, 7, 8 },
    { 9, 10, 11, 12 },
    { 13, 14, 15, 100 }
};

Next, let's clarify what we mean by the main diagonal and the secondary diagonal:

  • Main Diagonal – The diagonal that runs from the top-left to the bottom-right of the matrix. For example, in the example above, the elements on the main diagonal are 1, 6, 11, and 100
  • Secondary Diagonal – The diagonal that runs from the top-right to the bottom-left. In the same example, 4, 7, 10, and 13 belong to the secondary diagonal.

The sums of both diagonals are as follows:

static final int SUM_MAIN_DIAGONAL = 118; // 1+6+11+100
static final int SUM_SECONDARY_DIAGONAL = 34; // 4+7+10+13

Since we want to create methods covering both diagonal types, let's create an Enum for them:

enum DiagonalType {
    Main,
    Secondary
}

Later, we can pass a DiagonalType to our solution method to get the corresponding result.

3. Identifying Elements on a Diagonal

To calculate the sum of diagonal values, we must first identify the elements on a diagonal. In the main diagonal's case, it's pretty straightforward. When an element's row-index (rowIdx) and column-index (colIdx) are equal, the element is on the main diagonal, such as MATRIX[0][0] = 1, MATRIX[1][1] = 6, and MATRIX[3][3] = 100.

On the other hand, given an n x n matrix, if an element is on the secondary diagonal, we have rowIdx + colIdx = n - 1. For instance, in our 4 x 4 matrix example, MATRIX[0][3] = 4 (0 + 3 = 4 - 1), MATRIX[1][2] = 7 (1 + 2 = 4 - 1), and MATRIX[3][0] = 13 (3 + 0 = 4 - 1). So, we have colIdx = n - rowIdx - 1.

Now that we understand the rule for diagonal elements, let's create methods to calculate the sums.
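Before we do, here's a tiny sketch (our own helper, not part of the solutions that follow) that expresses the two index rules as a predicate:

// true if matrix[rowIdx][colIdx] lies on the requested diagonal of an n x n matrix
static boolean isOnDiagonal(int rowIdx, int colIdx, int n, DiagonalType diagonalType) {
    return diagonalType == DiagonalType.Main
      ? rowIdx == colIdx           // main diagonal: indexes are equal
      : rowIdx + colIdx == n - 1;  // secondary diagonal: indexes sum to n - 1
}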
4. The Loop Approach

A straightforward approach is looping through the row indexes and, depending on the required diagonal type, summing the elements:

int diagonalSumBySingleLoop(int[][] matrix, DiagonalType diagonalType) {
    int sum = 0;
    int n = matrix.length;
    for (int rowIdx = 0; rowIdx < n; rowIdx++) {
        int colIdx = diagonalType == Main ? rowIdx : n - rowIdx - 1;
        sum += matrix[rowIdx][colIdx];
    }
    return sum;
}

As we can see in the implementation above, we calculate the required colIdx depending on the given diagonalType, and then add the element at rowIdx and colIdx to the sum variable.

Next, let's test whether this solution produces the expected results:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumBySingleLoop(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumBySingleLoop(MATRIX, Secondary));

It turns out this method sums the correct values for both diagonal types.

5. DiagonalType with an IntBinaryOperator Object

The loop-based solution is straightforward. However, in each loop step, we must check the diagonalType instance to determine colIdx, although diagonalType is a parameter that won't change during the loop. Next, let's see if we can improve it a bit.

One idea is to assign each DiagonalType instance an IntBinaryOperator object so that we can calculate colIdx without checking which diagonal type we have:

enum DiagonalType {
    Main((rowIdx, len) -> rowIdx),
    Secondary((rowIdx, len) -> len - rowIdx - 1);

    public final IntBinaryOperator colIdxOp;

    DiagonalType(IntBinaryOperator colIdxOp) {
        this.colIdxOp = colIdxOp;
    }
}

As the code above shows, we added an IntBinaryOperator property to the DiagonalType Enum. IntBinaryOperator is a functional interface that takes two int arguments and produces an int result. In this example, we use two lambda expressions as the Enum instances' IntBinaryOperator objects.

Now, we can remove the ternary operation that checks the diagonal type in the for loop:

int diagonalSumFunctional(int[][] matrix, DiagonalType diagonalType) {
    int sum = 0;
    int n = matrix.length;
    for (int rowIdx = 0; rowIdx < n; rowIdx++) {
        sum += matrix[rowIdx][diagonalType.colIdxOp.applyAsInt(rowIdx, n)];
    }
    return sum;
}

As we can see, we can directly invoke diagonalType's colIdxOp function by calling applyAsInt() to get the required colIdx. Of course, the test still passes:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumFunctional(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumFunctional(MATRIX, Secondary));

6. Using Stream API

Functional interfaces were introduced in Java 8. Another significant feature Java 8 brought is the Stream API. Next, let's solve the problem using these two Java 8 features:

public int diagonalSumFunctionalByStream(int[][] matrix, DiagonalType diagonalType) {
    int n = matrix.length;
    return IntStream.range(0, n)
      .map(i -> matrix[i][diagonalType.colIdxOp.applyAsInt(i, n)])
      .sum();
}

In this example, we replace the for-loop with IntStream.range(). Also, map() is responsible for transforming each index (i) into the required element on the diagonal. Then, sum() produces the result. Finally, this solution passes the test as well:

assertEquals(SUM_MAIN_DIAGONAL, diagonalSumFunctionalByStream(MATRIX, Main));
assertEquals(SUM_SECONDARY_DIAGONAL, diagonalSumFunctionalByStream(MATRIX, Secondary));

This approach is fluent and easier to read than the initial loop-based solution.

7. Conclusion

In this article, we've explored different ways to calculate the sum of diagonal values in a 2D Java array. Understanding the indexing for the main and secondary diagonals is the key to solving the problem.

As always, the complete source code for the examples is available over on GitHub.

Introduction to MyBatis-Plus

  • Persistence
  • MyBatis

Learn about the MyBatis-Plus persistence framework as an alternative to JDBC and ORM tools such as JPA/Hibernate.       

1. Introduction

MyBatis is a popular open-source persistence framework that provides an alternative to JDBC and Hibernate. In this article, we'll discuss an extension over MyBatis called MyBatis-Plus, loaded with many handy features offering rapid development and better efficiency.

2. MyBatis-Plus Setup

2.1. Maven Dependency

First, let's add the following Maven dependency to our pom.xml:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-spring-boot3-starter</artifactId>
    <version>3.5.7</version>
</dependency>

The latest version of the Maven dependency can be found here. Since this is the Spring Boot 3-based Maven dependency, we'll also be required to add the spring-boot-starter dependency to our pom.xml.

Alternatively, we can add the following dependency when using Spring Boot 2:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.5.7</version>
</dependency>

Next, we'll add the H2 dependency to our pom.xml for an in-memory database to verify the features and capabilities of MyBatis-Plus:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.3.230</version>
</dependency>

Similarly, we can find the latest version of this Maven dependency here. We can also use MySQL for the integration.

2.2. Client

Once our setup is ready, let's create the Client entity with a few properties like id, firstName, lastName, and email:

@TableName("client")
public class Client {

    @TableId(type = IdType.AUTO)
    private Long id;
    private String firstName;
    private String lastName;
    private String email;

    // getters and setters ...
}

Here, we've used MyBatis-Plus's self-explanatory annotations like @TableName and @TableId for quick integration with the client table in the underlying database.

2.3. ClientMapper

Then, we'll create the mapper interface for the Client entity – ClientMapper – that extends the BaseMapper interface provided by MyBatis-Plus:

@Mapper
public interface ClientMapper extends BaseMapper<Client> {
}

The BaseMapper interface provides numerous default methods like insert(), selectOne(), updateById(), insertOrUpdate(), deleteById(), and deleteByIds() for CRUD operations.

2.4. ClientService

Next, let's create the ClientService service interface extending the IService interface:

public interface ClientService extends IService<Client> {
}

The IService interface encapsulates the default implementations of CRUD operations and uses the BaseMapper interface to offer simple and maintainable basic database operations.

2.5. ClientServiceImpl

Last, we'll create the ClientServiceImpl class:

@Service
public class ClientServiceImpl extends ServiceImpl<ClientMapper, Client> implements ClientService {

    @Autowired
    private ClientMapper clientMapper;
}

It's the service implementation for the Client entity, injected with the ClientMapper dependency.
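The following sections show the SQL statements that MyBatis-Plus executes. To make them visible, we can raise the log level for our package — a minimal sketch of the application.yml entry, assuming our code lives under com.baeldung.mybatisplus:

logging:
  level:
    com.baeldung.mybatisplus: DEBUG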
3. CRUD Operations

3.1. Create

Now that we have all the utility interfaces and classes ready, let's use the ClientService interface to create a Client object:

Client client = new Client();
client.setFirstName("Anshul");
client.setLastName("Bansal");
client.setEmail("anshul.bansal@example.com");

clientService.save(client);
assertNotNull(client.getId());

We can observe the following logs when saving the client object once we set the logging level to DEBUG for the package com.baeldung.mybatisplus:

16:07:57.404 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Preparing: INSERT INTO client ( first_name, last_name, email ) VALUES ( ?, ?, ? )
16:07:57.414 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Anshul(String), Bansal(String), anshul.bansal@example.com(String)
16:07:57.415 [main] DEBUG c.b.m.mapper.ClientMapper.insert - <== Updates: 1

The logs generated by the ClientMapper interface show the insert query with its parameters and the final number of rows inserted in the database.

3.2. Read

Next, let's check out a few handy read methods like getById() and list():

assertNotNull(clientService.getById(2));
assertEquals(6, clientService.list().size());

Similarly, we can observe the following SELECT statements in the logs:

16:07:57.423 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Preparing: SELECT id,first_name,last_name,email,creation_date FROM client WHERE id=?
16:07:57.423 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Parameters: 2(Long)
16:07:57.429 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - <== Total: 1
16:07:57.437 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Preparing: SELECT id,first_name,last_name,email FROM client
16:07:57.438 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Parameters:
16:07:57.439 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - <== Total: 6

Also, the MyBatis-Plus framework comes with a few handy wrapper classes like QueryWrapper, LambdaQueryWrapper, and QueryChainWrapper:

Map<String, Object> map = Map.of("id", 2, "first_name", "Laxman");
QueryWrapper<Client> clientQueryWrapper = new QueryWrapper<>();
clientQueryWrapper.allEq(map);
assertNotNull(clientService.getBaseMapper().selectOne(clientQueryWrapper));

LambdaQueryWrapper<Client> lambdaQueryWrapper = new LambdaQueryWrapper<>();
lambdaQueryWrapper.eq(Client::getId, 3);
assertNotNull(clientService.getBaseMapper().selectOne(lambdaQueryWrapper));

QueryChainWrapper<Client> queryChainWrapper = clientService.query();
queryChainWrapper.allEq(map);
assertNotNull(clientService.getBaseMapper().selectOne(queryChainWrapper.getWrapper()));

Here, we've used the getBaseMapper() method of the ClientService interface to utilize the wrapper classes, letting us write complex queries intuitively.

3.3. Update

Then, let's take a look at a few ways to execute updates:

Client client = clientService.getById(2);
client.setEmail("anshul.bansal@baeldung.com");
clientService.updateById(client);

assertEquals("anshul.bansal@baeldung.com", clientService.getById(2).getEmail());

We can check the console for the following logs:

16:07:57.440 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - ==> Preparing: UPDATE client SET email=? WHERE id=?
16:07:57.441 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - ==> Parameters: anshul.bansal@baeldung.com(String), 2(Long)
16:07:57.441 [main] DEBUG c.b.m.mapper.ClientMapper.updateById - <== Updates: 1

Similarly, we can use the LambdaUpdateWrapper class to update Client objects:

LambdaUpdateWrapper<Client> lambdaUpdateWrapper = new LambdaUpdateWrapper<>();
lambdaUpdateWrapper.set(Client::getEmail, "x@e.com");
assertTrue(clientService.update(lambdaUpdateWrapper));

QueryWrapper<Client> clientQueryWrapper = new QueryWrapper<>();
clientQueryWrapper.allEq(Map.of("email", "x@e.com"));
assertThat(clientService.list(clientQueryWrapper).size()).isGreaterThan(5);

Once the client objects are updated, we use the QueryWrapper class to confirm the operation.
3.4. Delete

Similarly, we can use the removeById() or removeByMap() methods to delete records:

clientService.removeById(1);
assertNull(clientService.getById(1));

Map<String, Object> columnMap = new HashMap<>();
columnMap.put("email", "x@e.com");
clientService.removeByMap(columnMap);

assertEquals(0, clientService.list().size());

The logs for the delete operations look like this:

21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Preparing: DELETE FROM client WHERE id=?
21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Parameters: 1(Long)
21:55:12.938 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - <== Updates: 1
21:57:14.278 [main] DEBUG c.b.m.mapper.ClientMapper.delete - ==> Preparing: DELETE FROM client WHERE (email = ?)
21:57:14.286 [main] DEBUG c.b.m.mapper.ClientMapper.delete - ==> Parameters: x@e.com(String)
21:57:14.287 [main] DEBUG c.b.m.mapper.ClientMapper.delete - <== Updates: 5

Similar to the update logs, these logs show the delete query with its parameters and the total rows deleted from the database.

4. Extra Features

Let's discuss a few handy features available in MyBatis-Plus as extensions over MyBatis.

4.1. Batch Operations

First and foremost is the ability to perform common CRUD operations in batches, thereby improving performance and efficiency:

Client client2 = new Client();
client2.setFirstName("Harry");

Client client3 = new Client();
client3.setFirstName("Ron");

Client client4 = new Client();
client4.setFirstName("Hermione");

// create in batches
clientService.saveBatch(Arrays.asList(client2, client3, client4));

assertNotNull(client2.getId());
assertNotNull(client3.getId());
assertNotNull(client4.getId());

Likewise, let's check out the logs to see the batch inserts in action:

16:07:57.419 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Preparing: INSERT INTO client ( first_name ) VALUES ( ? )
16:07:57.419 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Harry(String)
16:07:57.421 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Ron(String)
16:07:57.421 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: Hermione(String)

Also, we've got methods like updateBatchById(), saveOrUpdateBatch(), and removeBatchByIds() to perform save, update, or delete operations for a collection of objects in batches.

4.2. Pagination

The MyBatis-Plus framework offers an intuitive way to paginate query results. All we need is to declare the MybatisPlusInterceptor class as a Spring bean and add the PaginationInnerInterceptor class, defined with the database type, as an inner interceptor:

@Configuration
public class MyBatisPlusConfig {

    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.H2));
        return interceptor;
    }
}

Then, we can use the Page class to paginate the records. For instance, let's fetch the second page with three results:

Page<Client> page = Page.of(2, 3);
clientService.page(page, null).getRecords();
assertEquals(3, clientService.page(page, null).getRecords().size());

So, we can observe the following logs for the above operation:

16:07:57.487 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Preparing: SELECT id,first_name,last_name,email FROM client LIMIT ? OFFSET ?
16:07:57.487 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - ==> Parameters: 3(Long), 3(Long)
16:07:57.488 [main] DEBUG c.b.m.mapper.ClientMapper.selectList - <== Total: 3

Likewise, these logs show the select query with its parameters and the total rows selected from the database.
4.3. Streaming Query

MyBatis-Plus offers support for streaming queries through methods like selectList(), selectByMap(), and selectBatchIds(), letting us process big data and meet our performance objectives.

For instance, let's check out the selectList() method available through the ClientService interface:

clientService.getBaseMapper()
  .selectList(Wrappers.emptyWrapper(), resultContext -> assertNotNull(resultContext.getResultObject()));

Here, we've used the getResultObject() method to get every record from the database. Likewise, we've got the getResultCount() method that returns the count of results being processed and the stop() method to halt the processing of the result set.

4.4. Auto-fill

Being a fairly opinionated and intelligent framework, MyBatis-Plus also supports automatically filling fields for insert and update operations. For example, we can use the @TableField annotation to set the creationDate when inserting a new record and lastModifiedDate in the event of an update:

public class Client {

    // ...

    @TableField(fill = FieldFill.INSERT)
    private LocalDateTime creationDate;

    @TableField(fill = FieldFill.UPDATE)
    private LocalDateTime lastModifiedDate;

    // getters and setters ...
}

Now, MyBatis-Plus will fill the creation_date and last_modified_date columns automatically with every insert and update query, provided we tell it which values to use — see the sketch below.
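The @TableField(fill = ...) markers only declare which fields participate in auto-fill; the values themselves typically come from a MetaObjectHandler bean (com.baomidou.mybatisplus.core.handlers.MetaObjectHandler, working on MyBatis' MetaObject). Here's a minimal sketch of such a handler, assuming the field names from our Client entity:

@Component
public class AutoFillMetaObjectHandler implements MetaObjectHandler {

    @Override
    public void insertFill(MetaObject metaObject) {
        // populate creationDate on INSERT
        this.strictInsertFill(metaObject, "creationDate", LocalDateTime.class, LocalDateTime.now());
    }

    @Override
    public void updateFill(MetaObject metaObject) {
        // populate lastModifiedDate on UPDATE
        this.strictUpdateFill(metaObject, "lastModifiedDate", LocalDateTime.class, LocalDateTime.now());
    }
}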
4.5. Logical Delete

The MyBatis-Plus framework offers a simple and efficient strategy to delete records logically, by flagging them in the database. We can enable the feature by using the @TableLogic annotation over the deleted property:

@TableName("client")
public class Client {

    // ...

    @TableLogic
    private Integer deleted;

    // getters and setters ...
}

Now, the framework will automatically handle the logical deletion of records when performing database operations. So, let's remove a Client object and try to read it back:

clientService.removeById(harry);
assertNull(clientService.getById(harry.getId()));

We can observe the following logs: the update query sets the value of the deleted property to 1, and the select query filters on the 0 value:

15:38:41.955 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Preparing: UPDATE client SET last_modified_date=?, deleted=1 WHERE id=? AND deleted=0
15:38:41.955 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - ==> Parameters: null, 7(Long)
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.deleteById - <== Updates: 1
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Preparing: SELECT id,first_name,last_name,email,creation_date,last_modified_date,deleted FROM client WHERE id=? AND deleted=0
15:38:41.957 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - ==> Parameters: 7(Long)
15:38:41.958 [main] DEBUG c.b.m.mapper.ClientMapper.selectById - <== Total: 0

Also, it's possible to modify the default configuration through the application.yml:

mybatis-plus:
  global-config:
    db-config:
      logic-delete-field: deleted
      logic-delete-value: 1
      logic-not-delete-value: 0

The above configuration lets us change the name of the logical-delete field and its deleted and active values.

4.6. Code Generator

MyBatis-Plus offers an automatic code generation feature to avoid manually creating redundant code like the entity, mapper, and service interfaces. First, let's add the latest mybatis-plus-generator dependency to our pom.xml:

<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-generator</artifactId>
    <version>3.5.7</version>
</dependency>

Also, we'll require the support of a template engine like Velocity or Freemarker.

Then, we can use MyBatis-Plus's FastAutoGenerator class, with the FreemarkerTemplateEngine class set as the template engine, to connect to the underlying database, scan all the existing tables, and generate the utility code:

FastAutoGenerator.create("jdbc:h2:file:~/mybatisplus", "sa", "")
  .globalConfig(builder -> {
      builder.author("anshulbansal")
        .outputDir("../tutorials/mybatis-plus/src/main/java/")
        .disableOpenDir();
  })
  .packageConfig(builder -> builder.parent("com.baeldung.mybatisplus").service("ClientService"))
  .templateEngine(new FreemarkerTemplateEngine())
  .execute();

Therefore, when the above program runs, it generates the output files in the com.baeldung.mybatisplus package:

List<String> codeFiles = Arrays.asList("src/main/java/com/baeldung/mybatisplus/entity/Client.java",
  "src/main/java/com/baeldung/mybatisplus/mapper/ClientMapper.java",
  "src/main/java/com/baeldung/mybatisplus/service/ClientService.java",
  "src/main/java/com/baeldung/mybatisplus/service/impl/ClientServiceImpl.java");

for (String filePath : codeFiles) {
    Path path = Paths.get(filePath);
    assertTrue(Files.exists(path));
}

Here, we've asserted that the automatically generated classes/interfaces like Client, ClientMapper, ClientService, and ClientServiceImpl exist at the corresponding paths.

4.7. Custom ID Generator

The MyBatis-Plus framework allows us to implement a custom ID generator using the IdentifierGenerator interface. For instance, let's create the TimestampIdGenerator class and implement the nextId() method of the IdentifierGenerator interface to return the system's nanoseconds:

@Component
public class TimestampIdGenerator implements IdentifierGenerator {

    @Override
    public Long nextId(Object entity) {
        return System.nanoTime();
    }
}

Now, we can create a Client object, setting a custom ID using the timestampIdGenerator bean:

Client harry = new Client();
harry.setId(timestampIdGenerator.nextId(harry));
harry.setFirstName("Harry");
clientService.save(harry);

assertThat(timestampIdGenerator.nextId(harry)).describedAs(
    "Since we've used the timestampIdGenerator, the nextId value is greater than the previous Id")
  .isGreaterThan(harry.getId());

The logs show the custom ID value generated by the TimestampIdGenerator class:

16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Preparing: INSERT INTO client ( id, first_name, creation_date ) VALUES ( ?, ?, ? )
16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - ==> Parameters: 678220507350000(Long), Harry(String), null
16:54:36.485 [main] DEBUG c.b.m.mapper.ClientMapper.insert - <== Updates: 1

The long value of id shown in the parameters is the system time in nanoseconds.
4.8. DB Migration

MyBatis-Plus offers an automatic mechanism to handle DDL migrations. We simply need to extend the SimpleDdl class and override the getSqlFiles() method to return a list of paths of SQL files containing the database migration statements:

@Component
public class DBMigration extends SimpleDdl {

    @Override
    public List<String> getSqlFiles() {
        return Arrays.asList("db/db_v1.sql", "db/db_v2.sql");
    }
}

The underlying IDdl interface creates the ddl_history table to keep the history of DDL statements performed on the schema:

CREATE TABLE IF NOT EXISTS `ddl_history` (`script` varchar(500) NOT NULL COMMENT '脚本',`type` varchar(30) NOT NULL COMMENT '类型',`version` varchar(30) NOT NULL COMMENT '版本',PRIMARY KEY (`script`)) COMMENT = 'DDL 版本'
alter table client add column address varchar(255)
alter table client add column deleted int default 0

Note: this feature works with most databases, like MySQL and PostgreSQL, but not with H2.

5. Conclusion

In this introductory article, we've explored MyBatis-Plus – an extension over the popular MyBatis framework, loaded with many developer-friendly, opinionated ways to perform CRUD operations on a database. Also, we've seen a few handy features like batch operations, pagination, ID generation, and DB migration.

The complete code for this article is available over on GitHub.

Difference Between null and Empty Array in Java

  • Java Array
  • Data Structures
  • Java Null

Explore the difference between the null value and empty arrays in Java.       

1. Overview

In this tutorial, we'll explore the difference between null and empty arrays in Java. While they might sound similar, null and empty arrays have distinct behaviors and uses that are crucial for proper handling. Let's explore how they work and why they matter.

2. null Array in Java

A null array in Java indicates that the array reference doesn't point to any object in memory. Java initializes reference variables, including arrays, to null by default unless we explicitly assign a value. If we attempt to access or manipulate a null array, it triggers a NullPointerException, a common error indicating an attempt to use an uninitialized object reference:

@Test
public void givenNullArray_whenAccessLength_thenThrowsNullPointerException() {
    int[] nullArray = null;
    assertThrows(NullPointerException.class, () -> {
        int length = nullArray.length;
    });
}

In the test case above, we attempt to access the length of a null array, which results in a NullPointerException. The test executes without failure, verifying that a NullPointerException was thrown. Proper handling of null arrays typically involves checking for null before performing any operations to avoid runtime exceptions.

3. Empty Array in Java

An empty array in Java is an array that has been instantiated but contains zero elements. This means it's a valid array object and can be used in operations, although it doesn't hold any values. When we instantiate an empty array, Java allocates memory for the array structure but stores no elements. It's important to note that when we create a non-empty array without specifying values for its elements, they default to zero-like values — 0 for an integer array, false for a boolean array, and null for an object array:

@Test
public void givenEmptyArray_whenCheckLength_thenReturnsZero() {
    int[] emptyArray = new int[0];
    assertEquals(0, emptyArray.length);
}

The above test case executes successfully, demonstrating that an empty array has a zero length and doesn't cause any exceptions when accessed. Empty arrays are often used to initialize an array with a fixed size later or to signify that no elements are currently present. The sketch below shows how the two cases are usually told apart in practice.
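As an illustration (our own helper, not part of the tests above), a single guard clause covers both cases safely — the null check must come first so the length access never runs on a null reference:

static boolean isNullOrEmpty(int[] array) {
    // short-circuit: array.length is only evaluated when array != null
    return array == null || array.length == 0;
}

isNullOrEmpty(null);       // true, no NullPointerException
isNullOrEmpty(new int[0]); // true
isNullOrEmpty(new int[3]); // false: three zero-initialized elements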
4. Conclusion

In this article, we have examined the distinctions between null and empty arrays in Java. A null array signifies that the array reference doesn't point to any object, leading to potential NullPointerException errors if accessed without proper null checks. On the other hand, an empty array is a valid, instantiated array with no elements, providing a length of zero and enabling safe operations.

The complete source code for these tests is available over on GitHub.

How to Read Text Inside Mail Body

  • Networking

Learn how to use the JavaMail API to connect to an email server, retrieve emails, and read the text inside the email body.       

1. Introduction

In this tutorial, we'll explore how to read text inside the body of an email using Java. We'll use the JavaMail API to connect to an email server, retrieve emails, and read the text inside the email body.

2. Setting Up

Before we begin, we need to add the Jakarta Mail dependency to our pom.xml file:

<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>jakarta.mail</artifactId>
    <version>2.0.1</version>
</dependency>

The JavaMail API is a set of classes and interfaces that provide a framework for reading and sending email in Java. This library allows us to handle email-related tasks, such as connecting to email servers and reading email content.

3. Connecting to the Email Server

To connect to the email server, we need to create a Session object, which acts as the mail session for our application. This session uses a Store object to establish a connection with the email server. Here's how we set up the JavaMail API and connect to the email server:

// Set up the JavaMail API
Properties props = new Properties();
props.put("mail.smtp.host", "smtp.gmail.com");
props.put("mail.smtp.port", "587");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");

Session session = Session.getInstance(props, new Authenticator() {
    @Override
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("your_email", "your_password");
    }
});

// Connect to the email server
try (Store store = session.getStore("imaps")) {
    store.connect("imap.gmail.com", "your_email", "your_password");
    // ...
} catch (MessagingException e) {
    // handle exception
}

First, we configure properties for the mail session with details about the SMTP server, including the host, port, authentication, and TLS settings. We then create a Session object using these properties and an Authenticator object that provides the email address and password for authentication. The Authenticator object is used to authenticate with the email server, and it returns a PasswordAuthentication object with the email address and password.

Once we have the Session object, we can use it to connect to the email server using the getStore() method, which returns a Store object. We use try-with-resources to manage the Store object, which ensures that the store is closed automatically after we're done using it.

4. Retrieving Emails

After successfully connecting to the email server, the next step is to retrieve emails from the inbox. This involves using the Folder class to access the inbox folder and then fetching the emails contained within it. Here's how we retrieve emails from the inbox folder:

// ... (same code as above to connect to the email server)

// Open the inbox folder
try (Folder inbox = store.getFolder("inbox")) {
    inbox.open(Folder.READ_ONLY);

    // Retrieve emails from the inbox
    Message[] messages = inbox.getMessages();
} catch (MessagingException e) {
    // handle exception
}

We use the Store object to get a Folder instance representing the inbox. The getFolder("inbox") method accesses the inbox folder. We then open this folder in read-only mode using Folder.READ_ONLY, which allows us to read emails without making any changes. The getMessages() method fetches all the messages in the inbox folder. These messages are stored in an array of Message objects.
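Before parsing any bodies, here's a quick sketch of iterating over that array just to verify retrieval works — getSubject() and getFrom() are standard Message accessors that read the envelope without parsing the content:

for (Message message : messages) {
    // envelope data is available without touching the body
    System.out.println("Subject: " + message.getSubject());
    System.out.println("From: " + Arrays.toString(message.getFrom()));
}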
5. Reading Email Content

Once we have the array of Message objects, we can iterate through them to access each individual email. To read the content of each email, we need to use the Message class and its related classes, such as Multipart and BodyPart. Here's an example of how to read the content of an email:

void retrieveEmails() throws MessagingException {
    // ... connection and open inbox folder

    for (Message message : messages) {
        try {
            Object content = message.getContent();
            if (content instanceof Multipart) {
                Multipart multipart = (Multipart) content;
                for (int i = 0; i < multipart.getCount(); i++) {
                    BodyPart bodyPart = multipart.getBodyPart(i);
                    if (bodyPart.getContentType().toLowerCase().startsWith("text/plain")) {
                        plainContent = (String) bodyPart.getContent();
                    } else if (bodyPart.getContentType().toLowerCase().startsWith("text/html")) {
                        // handle HTML content
                    } else {
                        // handle attachment
                    }
                }
            } else {
                plainContent = (String) content;
            }
        } catch (IOException | MessagingException e) {
            // handle exception
        }
    }
}

In this example, we iterate through each Message object in the array and get its content using the getContent() method. This method returns an Object, which can be a String for plain text or a Multipart for emails with multiple parts. If the content is an instance of String, the email is in plain text format, and we can simply cast the content to String. Otherwise, if the content is a Multipart object, we need to handle each part separately. We use the getCount() method to iterate through the parts and process them accordingly.

For each BodyPart in the Multipart, we check its content type using the getContentType() method. If the body part is a text part, we get its content using the getContent() method and check whether it's plain text or HTML content, processing it accordingly. Otherwise, we handle it as an attachment file.

6. Handling HTML Content

In addition to plain text and attachments, email bodies can also contain HTML content. To handle HTML content, we can use a library such as Jsoup to parse the HTML and extract the text content. Here's an example of how to handle HTML content using Jsoup:

try (InputStream inputStream = bodyPart.getInputStream()) {
    String htmlContent = new String(inputStream.readAllBytes(), "UTF-8");
    Document doc = Jsoup.parse(htmlContent);
    htmlContent = doc.text();
} catch (IOException e) {
    // Handle exception
}

In this example, we use Jsoup to parse the HTML content and extract the text content. We can then process the text content as needed.

7. Nested Multipart

In JavaMail, it's possible for a Multipart object to contain another Multipart object, which is known as a nested multipart message. To handle this scenario, we need to use recursion. This approach allows us to traverse the entire nested structure and extract the text content from each part.

First, we create a method to obtain the content of the Message object:

String extractTextContent(Message message) throws MessagingException, IOException {
    Object content = message.getContent();
    return getTextFromMessage(content);
}

Next, we create a method to process the content object. If the content is a Multipart, we iterate through each BodyPart and recursively extract the content from each part.
Otherwise, if the content is plain text, we return the String directly:

String getTextFromMessage(Object content) throws MessagingException, IOException {
    if (content instanceof Multipart) {
        Multipart multipart = (Multipart) content;
        StringBuilder text = new StringBuilder();
        for (int i = 0; i < multipart.getCount(); i++) {
            BodyPart bodyPart = multipart.getBodyPart(i);
            text.append(getTextFromMessage(bodyPart.getContent()));
        }
        return text.toString();
    } else if (content instanceof String) {
        return (String) content;
    }
    return "";
}

8. Testing

In this section, we test the retrieveEmails() method by sending an email whose body has two parts: plain text content and HTML content. In the test method, we retrieve the emails and validate that the plain text content and HTML content are correctly read and extracted from the email:

EmailService es = new EmailService(session);
es.retrieveEmails();

assertEquals("This is a text body", es.getPlainContent());
assertEquals("This is an HTML body", es.getHTMLContent());

9. Conclusion

In this tutorial, we've learned how to read text from email bodies using Java. We discussed setting up the JavaMail API, connecting to an email server, and extracting email content.

As always, the source code for the examples is available over on GitHub.

How to Convert to and From a Stream and Two Dimensional Array in Java

  • Java Array
  • Java Streams

Learn how to effectively convert a 2D array to a stream of rows or a flat stream, and then reassemble them back into a 2D array.       

1. Overview

Working with arrays and streams is a common task in Java, particularly when dealing with complex data structures. While 1D arrays and streams are straightforward, converting between 2D arrays and streams can be more involved. In this tutorial, we'll walk through converting a 2D array to a stream and vice versa, with detailed explanations and practical examples.

2. Converting a Two-Dimensional Array to a Stream

We'll discuss two ways to solve this problem. The first is converting the array to a stream of rows, and the second is converting it to a flat stream.

2.1. Convert a 2D Array to a Stream of Rows

To convert a 2D array to a stream of its rows, we can use the Arrays.stream() method. Let's see the respective test case:

int[][] array2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
Stream<int[]> streamOfRows = Arrays.stream(array2D);
int[][] resultArray2D = streamOfRows.toArray(int[][]::new);
assertArrayEquals(array2D, resultArray2D);

This creates a Stream<int[]> where each element in the stream is an array representing a row of the original 2D array.

2.2. Convert to a Flat Stream

If we want to flatten the 2D array into a single stream of elements, we can use the flatMapToInt() method. Let's see a test case showing how to implement it:

int[][] array2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
IntStream flatStream = Arrays.stream(array2D)
  .flatMapToInt(Arrays::stream);
int[] resultFlatArray = flatStream.toArray();
int[] expectedFlatArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
assertArrayEquals(expectedFlatArray, resultFlatArray);

This method takes a function that maps each row (array) to an IntStream, and then flattens these streams into a single IntStream.

3. Converting a Stream to a Two-Dimensional Array

Let's look at two ways to convert a stream to a 2D array.

3.1. Convert a Stream of Rows to a 2D Array

To convert a stream of rows (arrays) back into a 2D array, we can use the Stream.toArray() method. We must provide an array generator function that creates a 2D array of the required type. Let's see how it can be done:

int[][] originalArray2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
Stream<int[]> streamOfRows = Arrays.stream(originalArray2D);
int[][] resultArray2D = streamOfRows.toArray(int[][]::new);
assertArrayEquals(originalArray2D, resultArray2D);

This way, we easily converted the stream to a 2D array.

3.2. Convert a Flat Stream to a 2D Array

If we have a flat stream of elements and want to convert it into a 2D array, we need to know the dimensions of the target array. We can first collect the stream into a flat array and then populate the 2D array accordingly. Let's see how:

int[] flatArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
IntStream flatStream = Arrays.stream(flatArray);
int rows = 3;
int cols = 3;
int[][] expectedArray2D = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };

int[][] resultArray2D = new int[rows][cols];
int[] collectedArray = flatStream.toArray();
for (int i = 0; i < rows; i++) {
    System.arraycopy(collectedArray, i * cols, resultArray2D[i], 0, cols);
}
assertArrayEquals(expectedArray2D, resultArray2D);

As a result, we get our resultant 2D array.
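As a variation on the loop above, the same reshaping can be written with the Stream API alone — a sketch under the same assumption that the dimensions are known and collectedArray holds rows * cols elements:

int[][] resultArray2D = IntStream.range(0, rows)
  .mapToObj(r -> Arrays.copyOfRange(collectedArray, r * cols, (r + 1) * cols)) // slice one row
  .toArray(int[][]::new);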
4. Conclusion

In this article, we saw that converting between 2D arrays and streams in Java is a valuable skill that can simplify many programming tasks, especially when dealing with large datasets or performing complex transformations. By understanding how to effectively convert a 2D array to a stream of rows or a flat stream, and then reassemble them back into a 2D array, we can leverage the full power of Java's Stream API for more efficient and readable code. The provided examples and unit tests serve as a practical guide to help us master these conversions, ensuring our code remains clean and maintainable.

As always, the source code of all these examples is available over on GitHub.

Guide to Choosing Between Protocol Buffers and JSON

  • JSON
  • Protobuf

Learn about the differences between JSON and Protocol Buffers in their usage as data serialization formats.       

1. Overview

Protocol Buffers (Protobuf) and JSON are popular data serialization formats, but they differ significantly in readability, performance, efficiency, and size. In this tutorial, we'll compare these formats and explore their trade-offs. This will help us make informed decisions based on the use case when we need to choose one over the other.

2. Readability and Schema Requirements

Protobuf requires a predefined schema to define the structure of the data. It's a strict requirement without which our application can't interpret the binary data. To get a better understanding, let's see a sample schema.proto file:

syntax = "proto3";

message User {
    string name = 1;
    int32 age = 2;
    string email = 3;
}

message UserList {
    repeated User users = 1;
}

Further, if we look at a sample Protobuf message in base64 encoding, it lacks human readability:

ChwKBUFsaWNlEB4aEWFsaWNlQGV4YW1wbGUuY29tChgKA0JvYhAZGg9ib2JAZXhhbXBsZS5jb20=

Our application can only interpret this data in conjunction with the schema file. On the other hand, if we were to represent the same data in JSON format, we could do it without relying on any strict schema:

{
    "users": [
        { "name": "Alice", "age": 30, "email": "alice@example.com" },
        { "name": "Bob", "age": 25, "email": "bob@example.com" }
    ]
}

Additionally, the encoded data is perfectly human-readable. However, if our project requires strict validation of JSON data, we can use JSON Schema, a powerful tool for defining and validating the structure of JSON data. While it offers significant benefits, its use is optional.

3. Schema Evolution

Protobuf enforces a strict schema, ensuring strong data integrity, whereas JSON can facilitate schema-on-read data handling. Let's learn how both data formats support the evolution of the underlying data schema, but in different ways.

3.1. Backward Compatibility for Consumer Parsing

Backward compatibility means new code can still read data written by older code. So, it requires that a newer version correctly deserializes data serialized using an older schema version.

To ensure backward compatibility with JSON, the application should be designed to ignore unrecognized fields during deserialization. In addition, the consumer should provide default values for any unset fields. With Protocol Buffers, we can add default values directly in the schema itself, enhancing compatibility and simplifying data handling.

Further, any schema change for Protobuf must follow best practices to maintain backward compatibility. If we're adding a new field, we must use a unique field number that wasn't previously used. Similarly, we need to deprecate unused fields and reserve them to prevent any reuse of field numbers that could break backward compatibility.

Although we can maintain backward compatibility while using both formats, the mechanism for Protocol Buffers is more formal and strict.
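For the JSON side, here's a sketch of what "ignore unrecognized fields" looks like with Jackson, one common JSON parser (the User class mirrors our sample schema):

// Unknown JSON properties are silently skipped instead of failing deserialization
@JsonIgnoreProperties(ignoreUnknown = true)
public class User {
    private String name;
    private int age;
    private String email;

    // getters and setters ...
}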
In contrast, Protocol Buffers have built-in support for ignoring unknown fields. So, Protobuf messages can evolve with the assurance that unknown fields will be ignored. Lastly, it’s important to note that removing mandatory fields would break forward compatibility in both cases. So, the recommended practice involves deprecating the fields and gradually removing them. In the case of JSON, a common practice is to deprecate the fields in documentation and communicate this to the consumers. On the other hand, Protocol Buffers provide a more formal mechanism for deprecating fields within the schema definition. 4. Serialization, Deserialization, and Performance JSON serialization converts an object into a text-based format. On the other hand, Protobuf serialization converts an object into a compact binary format while complying with the definition from the .proto schema file. Since Protobuf can refer to the schema to identify the field names, it doesn’t need to preserve them with the data while serializing. As a result, the Protobuf format is far more space-efficient than JSON, which preserves the field names. By design, Protobuf generally outperforms JSON in terms of efficiency and performance. It typically takes up less storage space and generally completes the serialization and deserialization process much faster than the JSON data format. 5. When to Use JSON JSON is the de facto standard for web APIs, especially RESTful services. This is mainly due to its rich ecosystem of tools and libraries and its inherent compatibility with JavaScript. Moreover, its text-based nature makes it easy to debug and edit. So, using JSON for configuration data is a natural choice, as configurations should be easy for humans to understand and edit. Another interesting use case where the JSON format is preferred is logging. Due to its schema-less nature, it provides great flexibility for collecting logs from different applications into a centralized location without maintaining strict schemas. Lastly, it’s important to note that when working with Protobuf, a special schema-aware client and additional tooling are needed, whereas, for JSON, no special client is needed since JSON is a plain text format. So, we’ll likely benefit from the JSON format while developing a prototype or MVP solution because it allows us to introduce changes with less effort. 6. When to Use Protocol Buffers Protocol Buffers are pretty efficient for storage and transfer over the network. Additionally, they enforce strict rules for data integrity through the schema definition. So, we’re likely to benefit from them for such use cases. Applications that deal with real-time analytics, gaming, and financial systems are expected to be super-performant. So, we must evaluate the possibility of using Protobuf in such scenarios, especially for internal communications. Additionally, distributed database systems could benefit from Protobuf’s small memory footprint. So, Protocol Buffers are an excellent choice for encoding data and metadata for efficient data storage and high performance in data access. 7. Conclusion In this article, we explored the key differences between the JSON and Protocol Buffers data formats to enable informed decision-making while formulating the data encoding strategy for our application. JSON’s human readability and flexibility make it ideal for use cases such as web APIs, configuration files, and logging.
In contrast, Protocol Buffers offer superior performance and efficiency, making them suitable for real-time analytics, gaming, and distributed storage systems.

Java Weekly, Issue 554

  • Weekly Review

This Java Weekly is a quick summer read with some interesting AI and Spring Boot goodness.       

1. Spring and Java >> Spring Boot 3.3 Boosts Performance, Security, and Observability [infoq.com] It is always cool to see just how much Boot is improving from version to version. Lots of goodness here. >> Spring AI with Groq – a blazingly fast AI inference engine [spring.io] The integration and adoption of AI into the Spring community have been fast, but done well. A good read to understand one piece of the puzzle. Also worth reading: >> Common I/O Tasks in Modern Java [dev.java] >> OpenTelemetry Tracing on Spring Boot, Java Agent vs. Micrometer Tracing [frankel.ch] >> Netflix Adopts Virtual Threads: A Case Study on Performance and Pitfalls [infoq.com] Webinars and presentations: >> Video: Easy Implementation of a Client-Server Application in Java with FEPCOS-J [foojay.io] >> Spring Tips: Spring Security method security with special guest Rob Winch [spring.io] >> How to Read a JDK Enhancement Proposal – Inside Java Newscast #74 [inside.java] >> A Bootiful Podcast: Observability legend Jonatan Ivanov on the latest and greatest in Micrometer [spring.io] Time to upgrade: >> Hibernate 7.0.0.Beta1 released [relation.to] >> Hibernate Validator 9.0.0.Beta2 is out [relation.to] >> Quarkus 3.13.0 released [github.com] 2. Technical & Musings >> Leveraging Hibernate Search capabilities in a Quarkus application without a database [quarkus.io] Solid tutorial to follow along with, if only to explore an interesting, useful implementation. Also worth reading: >> Instead of restricting AI and algorithms, make them explainable [martinfowler.com] >> Differentiating rate limits in Apache APISIX [foojay.io] >> Making Use of ‘Silly’ Advice: Part 3 [jbrains.ca] 3. Pick of the Week >> A manifesto for small teams doing important work [seths.blog]

Build a Conversational AI With Apache Camel, LangChain4j, and WhatsApp

  • Artificial Intelligence
  • Spring Boot
  • Apache Camel

Learn how to integrate Apache Camel and LangChain4j into a Spring Boot application to handle AI-driven conversations over WhatsApp.

1. Overview In this tutorial, we’ll see how to integrate Apache Camel and LangChain4j into a Spring Boot application to handle AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. Apache Camel handles the routing and transformation of data between different systems, while LangChain4j provides the tools to interact with large language models and extract meaningful information. We discussed Ollama’s key benefits, installation, and hardware requirements in our tutorial How to Install Ollama Generative AI on Linux. Anyway, it’s cross-platform and available for Windows and macOS as well. We’ll use Postman to test the Ollama API, the WhatsApp API, and our Spring Boot controllers. 2. Initial Setup of Spring Boot First, let’s make sure that local port 8080 is unused, as we’ll need it for Spring Boot. Since we’ll be using the @RequestParam annotation to bind request parameters to Spring Boot controllers, we need to add the -parameters compiler argument: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>17</source> <target>17</target> <compilerArgs> <arg>-parameters</arg> </compilerArgs> </configuration> </plugin> If we miss it, information about parameter names won’t be available via reflection, so our REST calls will throw a java.lang.IllegalArgumentException. In addition, DEBUG-level logging of incoming and outgoing messages can help us, so let’s enable it in application.properties: # Logging configuration logging.level.root=INFO logging.level.com.baeldung.chatbot=DEBUG In case of trouble, we can also analyze the local network traffic between Ollama and Spring Boot with tcpdump for Linux and macOS, or windump for Windows. On the other hand, sniffing traffic between Spring Boot and WhatsApp Cloud is much more difficult because it’s over the HTTPS protocol. 3. LangChain4j for Ollama A typical Ollama installation listens on port 11434. In this case, we’ll run it with the qwen2:1.5b model because it’s fast enough for chatting, but we’re free to choose any other model. LangChain4j gives us several ChatLanguageModel.generate(..) methods that differ in their parameters. All these methods call Ollama’s REST API /api/chat, as we can verify by inspecting the network traffic. So let’s make sure it works properly, using one of the JSON examples in the Ollama documentation: Our query got a valid JSON response, so we’re ready to move on to LangChain4j. In case of trouble, let’s make sure to respect the case of the parameters. For example, “role”: “user” will produce a correct response, while “role”: “USER” won’t. 3.1. Configuring LangChain4j In the pom.xml, we need two dependencies for LangChain4j. We can check the latest version from the Maven repository: <dependency> <groupId>dev.langchain4j</groupId> <artifactId>langchain4j-core</artifactId> <version>0.33.0</version> </dependency> <dependency> <groupId>dev.langchain4j</groupId> <artifactId>langchain4j-ollama</artifactId> <version>0.33.0</version> </dependency> Then let’s add these parameters to application.properties: # Ollama API configuration ollama.api_url=http://localhost:11434/ ollama.model=qwen2:1.5b ollama.timeout=30 ollama.max_response_length=1000 The parameters ollama.timeout and ollama.max_response_length are optional. We included them as a safety measure because some models are known to have a bug that causes a loop in the response process.
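Before wiring LangChain4j into a service, we can replay the earlier /api/chat check from plain Java instead of Postman. This is only a sanity-check sketch, assuming a local Ollama on port 11434 with the qwen2:1.5b model already pulled:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaSmokeTest {

    public static void main(String[] args) throws Exception {
        // A non-streaming chat request, as in the Ollama documentation examples
        String body = """
            {
              "model": "qwen2:1.5b",
              "stream": false,
              "messages": [
                { "role": "user", "content": "Why is the sky blue?" }
              ]
            }""";

        HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("http://localhost:11434/api/chat"))
          .header("Content-Type", "application/json")
          .POST(HttpRequest.BodyPublishers.ofString(body))
          .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
          .send(request, HttpResponse.BodyHandlers.ofString());

        // A 200 status and a JSON body containing a "message" field mean Ollama is up
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}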
3.2. Implementing ChatbotService Using the @Value annotation, let’s inject these values from application.properties at runtime, ensuring that the configuration is decoupled from the application logic: @Value("${ollama.api_url}") private String apiUrl; @Value("${ollama.model}") private String modelName; @Value("${ollama.timeout}") private int timeout; @Value("${ollama.max_response_length}") private int maxResponseLength; Here is the initialization logic that needs to be run once the service bean is fully constructed. The OllamaChatModel object holds the configuration necessary to interact with the conversational AI model: private OllamaChatModel ollamaChatModel; @PostConstruct public void init() { this.ollamaChatModel = OllamaChatModel.builder() .baseUrl(apiUrl) .modelName(modelName) .timeout(Duration.ofSeconds(timeout)) .numPredict(maxResponseLength) .build(); } This method gets a question, sends it to the chat model, receives the response, and handles any errors that may occur during the process: public String getResponse(String question) { logger.debug("Sending to Ollama: {}", question); String answer = ollamaChatModel.generate(question); logger.debug("Receiving from Ollama: {}", answer); if (answer != null && !answer.isEmpty()) { return answer; } else { logger.error("Invalid Ollama response for:\n\n" + question); throw new ResponseStatusException( HttpStatus.SC_INTERNAL_SERVER_ERROR, "Ollama didn't generate a valid response", null); } } We’re ready for the controller. 3.3. Creating ChatbotController This controller is helpful during development to test if ChatbotService works properly: @Autowired private ChatbotService chatbotService; @GetMapping("/api/chatbot/send") public String getChatbotResponse(@RequestParam String question) { return chatbotService.getResponse(question); } Let’s give it a try: It works as expected. 4. Apache Camel for WhatsApp Before we continue, let’s create an account on Meta for Developers. For our testing purposes, using the WhatsApp API is free. 4.1. ngrok Reverse Proxy To integrate a local Spring Boot application with WhatsApp Business services, we need a cross-platform reverse proxy like ngrok connected to a free static domain. It creates a secure tunnel from a public URL with HTTPS protocol to our local server with HTTP protocol, allowing WhatsApp to communicate with our application. In this command, let’s replace xxx.ngrok-free.app with the static domain assigned to us by ngrok: ngrok http --domain=xxx.ngrok-free.app 8080 This forwards https://xxx.ngrok-free.app to http://localhost:8080. 4.2. Setting up Apache Camel The first dependency, camel-spring-boot-starter, integrates Apache Camel into a Spring Boot application and provides the necessary configurations for Camel routes. The second dependency, camel-http-starter, supports the creation of HTTP(S)-based routes, enabling the application to handle HTTP and HTTPS requests. The third dependency, camel-jackson, facilitates JSON processing with the Jackson library, allowing Camel routes to transform and marshal JSON data: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-starter</artifactId> <version>4.7.0</version> </dependency> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> <version>4.7.0</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>4.7.0</version> </dependency> We can check the latest version of Apache Camel from the Maven repository.
Finally, let’s add this configuration to application.properties: # WhatsApp API configuration whatsapp.verify_token=BaeldungDemo-Verify-Token whatsapp.api_url=https://graph.facebook.com/v20.0/PHONE_NUMBER_ID/messages whatsapp.access_token=ACCESS_TOKEN Getting the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN to put into these properties isn’t trivial, so we’ll see how to do it in detail. 4.3. Controller to Verify Webhook Token As a preliminary step, we also need a Spring Boot controller to validate the WhatsApp webhook token. The purpose is to verify our webhook endpoint before it starts receiving actual data from the WhatsApp service: @Value("${whatsapp.verify_token}") private String verifyToken; @GetMapping("/webhook") public String verifyWebhook(@RequestParam("hub.mode") String mode, @RequestParam("hub.verify_token") String token, @RequestParam("hub.challenge") String challenge) { if ("subscribe".equals(mode) && verifyToken.equals(token)) { return challenge; } else { return "Verification failed"; } } So, let’s recap what we’ve done so far: ngrok exposes our local Spring Boot server on a public IP with HTTPS; the Apache Camel dependencies are added; we have a controller to validate the WhatsApp webhook token; however, we don’t have the actual values of PHONE_NUMBER_ID and ACCESS_TOKEN yet. It’s time to set up our WhatsApp Business account to get these values and subscribe to the webhook service. 4.4. WhatsApp Business Account The official Get Started guide is quite difficult to follow and doesn’t fit our needs. That’s why the upcoming videos will be helpful to get the relevant steps for our Spring Boot application. After creating a business portfolio named “Baeldung Chatbot”, let’s create our business app: Then let’s get the ID of our WhatsApp business phone number, copy it inside the whatsapp.api_url in application.properties, and send a test message to our personal cell phone. Let’s bookmark this Quickstart API Setup page because we may need it during code development: At this point, we should have received this message on our cell phone: Now we need the whatsapp.access_token value in application.properties. Let’s go to System Users to generate a token with no expiration, using an account with administrator full access to our app: We’re ready to configure our webhook endpoint, which we previously created with the @GetMapping(“/webhook”) controller. Let’s start our Spring Boot application before continuing. As the webhook’s callback URL, we need to insert our ngrok static domain suffixed with /webhook, whereas our verification token is BaeldungDemo-Verify-Token: It’s important to follow these steps in the order we’ve shown them to avoid errors. 4.5. Configuring WhatsAppService to Send Messages As a reference, before we get into the init() and sendWhatsAppMessage(…) methods, let’s send a text message to our phone using Postman. This way we can see the required JSON and headers and compare them to the code. The Authorization header value is composed of Bearer followed by a space and our whatsapp.access_token, while the Content-Type header is handled automatically by Postman: The JSON structure is quite simple, as the sketch below shows. We have to be careful that the HTTP 200 response code doesn’t mean that the message was actually sent. We’ll only receive it if we’ve started a conversation by sending a message from our mobile phone to our WhatsApp business number.
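For reference, here’s a sketch of that request body (the recipient number is a placeholder); it mirrors the HashMap we’ll build in sendWhatsAppMessage(…) shortly:

{
  "messaging_product": "whatsapp",
  "to": "15551234567",
  "type": "text",
  "text": {
    "body": "Hello from our Spring Boot chatbot!"
  }
}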
In other words, the chatbot we create can never initiate a conversation; it can only answer users’ questions. That said, let’s inject whatsapp.api_url and whatsapp.access_token: @Value("${whatsapp.api_url}") private String apiUrl; @Value("${whatsapp.access_token}") private String apiToken; The init() method is responsible for setting up the necessary configurations for sending messages via the WhatsApp API. It defines and adds a new route to the CamelContext, which is responsible for handling the communication between our Spring Boot application and the WhatsApp service. Within this route configuration, we specify the headers required for authentication and content type, replicating the headers used when we tested the API with Postman: @Autowired private CamelContext camelContext; @PostConstruct public void init() throws Exception { camelContext.addRoutes(new RouteBuilder() { @Override public void configure() { JacksonDataFormat jacksonDataFormat = new JacksonDataFormat(); jacksonDataFormat.setPrettyPrint(true); from("direct:sendWhatsAppMessage") .setHeader("Authorization", constant("Bearer " + apiToken)) .setHeader("Content-Type", constant("application/json")) .marshal(jacksonDataFormat) .process(exchange -> { logger.debug("Sending JSON: {}", exchange.getIn().getBody(String.class)); }).to(apiUrl).process(exchange -> { logger.debug("Response: {}", exchange.getIn().getBody(String.class)); }); } }); } This way, the direct:sendWhatsAppMessage endpoint allows the route to be triggered programmatically within the application, ensuring that the message is properly marshaled by Jackson and sent with the necessary headers. The sendWhatsAppMessage(…) method uses the Camel ProducerTemplate to send the JSON payload to the direct:sendWhatsAppMessage route. The structure of the HashMap follows the JSON structure we previously used with Postman. This method ensures seamless integration with the WhatsApp API, providing a structured way to send messages from the Spring Boot application: @Autowired private ProducerTemplate producerTemplate; public void sendWhatsAppMessage(String toNumber, String message) { Map<String, Object> body = new HashMap<>(); body.put("messaging_product", "whatsapp"); body.put("to", toNumber); body.put("type", "text"); Map<String, String> text = new HashMap<>(); text.put("body", message); body.put("text", text); producerTemplate.sendBody("direct:sendWhatsAppMessage", body); } The code for sending messages is ready. 4.6. Configuring WhatsAppService to Receive Messages To handle incoming messages from our WhatsApp users, the processIncomingMessage(…) method processes the payload received from our webhook endpoint, extracts relevant information such as the sender’s phone number and the message content, and then generates an appropriate response using our chatbot service.
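Before looking at the implementation, it helps to visualize the payload shape that its JSON Pointer expressions navigate. Here’s a standalone sketch with a trimmed, hypothetical payload (real webhook payloads carry more metadata):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class WebhookPayloadDemo {

    public static void main(String[] args) throws Exception {
        // Trimmed example of a WhatsApp webhook notification (values are placeholders)
        String payload = """
            {
              "entry": [ {
                "changes": [ {
                  "value": {
                    "messages": [ {
                      "from": "15551234567",
                      "text": { "body": "Hello, chatbot!" }
                    } ]
                  }
                } ]
              } ]
            }""";

        // The same JSON Pointer navigation the service method uses
        JsonNode messages = new ObjectMapper().readTree(payload)
          .at("/entry/0/changes/0/value/messages");

        System.out.println(messages.get(0).at("/from").asText());      // 15551234567
        System.out.println(messages.get(0).at("/text/body").asText()); // Hello, chatbot!
    }
}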
Finally, processIncomingMessage(…) uses the sendWhatsAppMessage(…) method to send Ollama’s response back to the user: @Autowired private ObjectMapper objectMapper; @Autowired private ChatbotService chatbotService; public void processIncomingMessage(String payload) { try { JsonNode jsonNode = objectMapper.readTree(payload); JsonNode messages = jsonNode.at("/entry/0/changes/0/value/messages"); if (messages.isArray() && messages.size() > 0) { String receivedText = messages.get(0).at("/text/body").asText(); String fromNumber = messages.get(0).at("/from").asText(); logger.debug(fromNumber + " sent the message: " + receivedText); this.sendWhatsAppMessage(fromNumber, chatbotService.getResponse(receivedText)); } } catch (Exception e) { logger.error("Error processing incoming payload: {} ", payload, e); } } The next step is to write the controllers to test our WhatsAppService methods. 4.7. Creating the WhatsAppController The sendWhatsAppMessage(…) controller will be useful during development to test the process of sending messages: @Autowired private WhatsAppService whatsAppService; @PostMapping("/api/whatsapp/send") public String sendWhatsAppMessage(@RequestParam String to, @RequestParam String message) { whatsAppService.sendWhatsAppMessage(to, message); return "Message sent"; } Let’s give it a try: It works as expected. Everything is ready for writing the receiveMessage(…) controller, which will receive messages sent by users: @PostMapping("/webhook") public void receiveMessage(@RequestBody String payload) { whatsAppService.processIncomingMessage(payload); } This is the final test: Ollama answered our math question using LaTeX syntax. The qwen2:1.5b LLM we’re using supports 29 languages, and here’s the full list. 5. Conclusion In this article, we demonstrated how to integrate Apache Camel and LangChain4j into a Spring Boot application to manage AI-driven conversations over WhatsApp, using a local installation of Ollama for AI processing. We started by setting up Ollama and configuring our Spring Boot application to handle request parameters. We then integrated LangChain4j to interact with an Ollama model, using ChatbotService to handle AI responses and ensure seamless communication. For WhatsApp integration, we set up a WhatsApp Business account and used ngrok as a reverse proxy to facilitate communication between our local server and WhatsApp. We configured Apache Camel and created WhatsAppService to process incoming messages, generate responses using our ChatbotService, and respond appropriately. We tested ChatbotService and WhatsAppService using dedicated controllers to ensure full functionality. As always, the full source code is available over on GitHub.

Java Enums With All HTTP Status Codes

  • Java Web
  • REST
  • Apache HttpClient
  • Enums
  • OkHttp
  • RestTemplate

Learn how to use an enum to represent the standard HTTP status codes.

1. Introduction Enums provide a powerful way to define a set of named constants in the Java programming language. These are useful for representing fixed sets of related values, such as HTTP status codes. As we know, all web servers on the Internet issue HTTP status codes as standard response codes. In this tutorial, we’ll delve into creating a Java enum that includes all the HTTP status codes. 2. Understanding HTTP Status Codes HTTP status codes play a crucial role in web communication by informing clients of the results of their requests. Furthermore, servers issue these three-digit codes, which fall into five categories, each serving a specific function in the HTTP protocol. 3. Benefits of Using Enums for HTTP Status Codes Enumerating HTTP status codes in Java offers several advantages, including: Type Safety: using an enum ensures type safety, making our code more readable and maintainable Grouped Constants: enums group related constants together, providing a clear and structured way to handle fixed sets of values Avoiding Hardcoded Values: defining HTTP status codes as an enum helps prevent errors from hardcoded strings or integers Enhanced Clarity and Maintainability: this approach promotes best practices in software development by enhancing clarity, reducing bugs, and improving code maintainability 4. Basic Approach To effectively manage HTTP status codes in our Java applications, we can define an enum that encapsulates all standard HTTP status codes and their descriptions. This approach allows us to leverage the benefits of enums, such as type safety and code clarity. Let’s start by defining the HttpStatus enum: public enum HttpStatus { CONTINUE(100, "Continue"), SWITCHING_PROTOCOLS(101, "Switching Protocols"), OK(200, "OK"), CREATED(201, "Created"), ACCEPTED(202, "Accepted"), MULTIPLE_CHOICES(300, "Multiple Choices"), MOVED_PERMANENTLY(301, "Moved Permanently"), FOUND(302, "Found"), BAD_REQUEST(400, "Bad Request"), UNAUTHORIZED(401, "Unauthorized"), FORBIDDEN(403, "Forbidden"), NOT_FOUND(404, "Not Found"), INTERNAL_SERVER_ERROR(500, "Internal Server Error"), NOT_IMPLEMENTED(501, "Not Implemented"), BAD_GATEWAY(502, "Bad Gateway"), UNKNOWN(-1, "Unknown Status"); private final int code; private final String description; HttpStatus(int code, String description) { this.code = code; this.description = description; } public static HttpStatus getStatusFromCode(int code) { for (HttpStatus status : HttpStatus.values()) { if (status.getCode() == code) { return status; } } return UNKNOWN; } public int getCode() { return code; } public String getDescription() { return description; } } Each constant in this enum is associated with an integer code and a string description. Moreover, the constructor initializes these values, and we provide getter methods to retrieve them.
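As a quick usage sketch (assuming the HttpStatus enum above is on the classpath), here’s how a lookup by numeric code behaves, including the UNKNOWN fallback:

public class HttpStatusDemo {

    public static void main(String[] args) {
        // Look up a status by its numeric code
        HttpStatus notFound = HttpStatus.getStatusFromCode(404);
        System.out.println(notFound);                  // NOT_FOUND
        System.out.println(notFound.getDescription()); // Not Found

        // Codes without a matching constant fall back to UNKNOWN
        System.out.println(HttpStatus.getStatusFromCode(299)); // UNKNOWN
    }
}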
Let’s create unit tests to ensure our HttpStatus enum works correctly: @Test public void givenStatusCode_whenGetCode_thenCorrectCode() { assertEquals(100, HttpStatus.CONTINUE.getCode()); assertEquals(200, HttpStatus.OK.getCode()); assertEquals(300, HttpStatus.MULTIPLE_CHOICES.getCode()); assertEquals(400, HttpStatus.BAD_REQUEST.getCode()); assertEquals(500, HttpStatus.INTERNAL_SERVER_ERROR.getCode()); } @Test public void givenStatusCode_whenGetDescription_thenCorrectDescription() { assertEquals("Continue", HttpStatus.CONTINUE.getDescription()); assertEquals("OK", HttpStatus.OK.getDescription()); assertEquals("Multiple Choices", HttpStatus.MULTIPLE_CHOICES.getDescription()); assertEquals("Bad Request", HttpStatus.BAD_REQUEST.getDescription()); assertEquals("Internal Server Error", HttpStatus.INTERNAL_SERVER_ERROR.getDescription()); } Here, we verify that the getCode() and getDescription() methods return the correct values for various HTTP status codes. The first test method checks if the getCode() method returns the correct integer code for each enum constant. Similarly, the second test method ensures the getDescription() method returns the appropriate string description. 5. Using Apache HttpComponents Apache HttpComponents is a popular library for HTTP communication. To use it with Maven, we include the following dependency in our pom.xml: <dependency> <groupId>org.apache.httpcomponents.client5</groupId> <artifactId>httpclient5</artifactId> <version>5.3.1</version> </dependency> We can find more details about this dependency on Maven Central. We can use our HttpStatus enum to handle HTTP responses: @Test public void givenHttpRequest_whenUsingApacheHttpComponents_thenCorrectStatusDescription() throws IOException { CloseableHttpClient httpClient = HttpClients.createDefault(); HttpGet request = new HttpGet("http://example.com"); try (CloseableHttpResponse response = httpClient.execute(request)) { String statusDescription = HttpStatus.getStatusFromCode(response.getCode()).getDescription(); assertEquals("OK", statusDescription); } } Here, we start by creating a CloseableHttpClient instance using the createDefault() method. Moreover, this client is responsible for making HTTP requests. We then construct an HTTP GET request to http://example.com with new HttpGet(“http://example.com”). By executing the request with the execute() method, we receive a CloseableHttpResponse object. From this response, we extract the status code using response.getCode(), pass it to HttpStatus.getStatusFromCode(), and call getDescription() to retrieve the status description associated with the code. Note that HttpClient 5.x exposes the status code via getCode() directly; the getStatusLine() method from HttpClient 4.x is no longer available. Finally, we use assertEquals() to ensure that the description matches the expected value, verifying that our status code handling is accurate. 6. Using Spring’s RestTemplate Spring Framework’s RestTemplate can also benefit from our HttpStatus enum for handling HTTP responses. Let’s first include the following dependency in our pom.xml: <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>6.1.11</version> </dependency> We can find more details about this dependency on Maven Central.
Let’s explore how we can utilize this approach with a simple implementation: @Test public void givenHttpRequest_whenUsingSpringRestTemplate_thenCorrectStatusDescription() { RestTemplate restTemplate = new RestTemplate(); ResponseEntity<String> response = restTemplate.getForEntity("http://example.com", String.class); int statusCode = response.getStatusCode().value(); String statusDescription = HttpStatus.getStatusFromCode(statusCode).getDescription(); assertEquals("OK", statusDescription); } Here, we create a RestTemplate instance to perform an HTTP GET request. After obtaining the ResponseEntity object, we extract the status code using response.getStatusCode().value(). We then pass this status code to HttpStatus.getStatusFromCode(statusCode).getDescription() to retrieve the corresponding status description. 7. Using OkHttp Library OkHttp is another widely-used HTTP client library in Java. Let’s incorporate this library into the Maven project by adding the following dependency to our pom.xml: <dependency> <groupId>com.squareup.okhttp3</groupId> <artifactId>okhttp</artifactId> <version>4.12.0</version> </dependency> We can find more details about this dependency on Maven Central. Now, let’s integrate our HttpStatus enum with OkHttp to handle responses: @Test public void givenHttpRequest_whenUsingOkHttp_thenCorrectStatusDescription() throws IOException { OkHttpClient client = new OkHttpClient(); Request request = new Request.Builder() .url("http://example.com") .build(); try (Response response = client.newCall(request).execute()) { int statusCode = response.code(); String statusDescription = HttpStatus.getStatusFromCode(statusCode).getDescription(); assertEquals("OK", statusDescription); } } In this test, we initialize an OkHttpClient instance and create an HTTP GET request using Request.Builder(). We then execute the request with the client.newCall(request).execute() method and obtain the Response object. We extract the status code using the response.code() method and pass it to HttpStatus.getStatusFromCode(statusCode).getDescription() to get the status description. 8. Conclusion In this article, we discussed using a Java enum to represent HTTP status codes, enhancing code readability, maintainability, and type safety. Whether we opt for a simple enum definition or use it with various Java libraries like Apache HttpComponents, Spring’s RestTemplate, or OkHttp, enums are robust enough to handle fixed sets of related constants in Java. As usual, we can find the full source code and examples over on GitHub.

Removing ROLE_ Prefix in Spring Security

  • Spring Security
  • Spring Security 5

Learn how to grant access in Spring Security without the ROLE_ prefix.

1. Overview Sometimes, when configuring application security, our user details might not include the ROLE_ prefix that Spring Security expects. As a result, we encounter “Forbidden” authorization errors and cannot access our secured endpoints. In this tutorial, we’ll explore how to reconfigure Spring Security to allow the use of roles without the ROLE_ prefix. 2. Spring Security Default Behavior We’ll start by demonstrating the default behavior of Spring Security’s role-checking mechanism. Let’s add an InMemoryUserDetailsManager that contains only one user with an ADMIN role: @Configuration public class UserDetailsConfig { @Bean public InMemoryUserDetailsManager userDetailsService() { UserDetails admin = User.withUsername("admin") .password(encoder().encode("password")) .authorities(singletonList(new SimpleGrantedAuthority("ADMIN"))) .build(); return new InMemoryUserDetailsManager(admin); } @Bean public PasswordEncoder encoder() { return new BCryptPasswordEncoder(); } } We’ve created the UserDetailsConfig configuration class that produces an InMemoryUserDetailsManager bean. Inside the factory method, we’ve used a PasswordEncoder required for user details passwords. Next, we’ll add the endpoint we want to call: @RestController public class TestSecuredController { @GetMapping("/test-resource") public ResponseEntity<String> testAdmin() { return ResponseEntity.ok("GET request successful"); } } We’ve added a simple GET endpoint that should return a 200 status code. Let’s create a security configuration: @Configuration @EnableWebSecurity public class DefaultSecurityJavaConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { return http.authorizeHttpRequests (authorizeRequests -> authorizeRequests .requestMatchers("/test-resource").hasRole("ADMIN")) .httpBasic(withDefaults()) .build(); } } Here, we’ve created a SecurityFilterChain bean where we specified that only users with the ADMIN role can access the test-resource endpoint. Now, let’s add these configurations to our test context and call our secured endpoint: @WebMvcTest(controllers = TestSecuredController.class) @ContextConfiguration(classes = { DefaultSecurityJavaConfig.class, UserDetailsConfig.class, TestSecuredController.class }) public class DefaultSecurityFilterChainIntegrationTest { @Autowired private WebApplicationContext wac; private MockMvc mockMvc; @BeforeEach void setup() { mockMvc = MockMvcBuilders .webAppContextSetup(wac) .apply(SecurityMockMvcConfigurers.springSecurity()) .build(); } @Test void givenDefaultSecurityFilterChainConfig_whenCallTheResourceWithAdminRole_thenForbiddenResponseCodeExpected() throws Exception { MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource") .header("Authorization", basicAuthHeader("admin", "password")); ResultActions performed = mockMvc.perform(with); MvcResult mvcResult = performed.andReturn(); assertEquals(403, mvcResult.getResponse().getStatus()); } } We’ve attached the user details configuration, security configuration, and the controller bean to our test context. Then, we called the test resource using admin user credentials, sending them in the Basic Authorization header. But instead of a 200 response code, we get a 403 Forbidden response.
If we deep dive into how the AuthorityAuthorizationManager.hasRole() method works, we’ll see the following code: public static <T> AuthorityAuthorizationManager<T> hasRole(String role) { Assert.notNull(role, "role cannot be null"); Assert.isTrue(!role.startsWith(ROLE_PREFIX), () -> role + " should not start with " + ROLE_PREFIX + " since " + ROLE_PREFIX + " is automatically prepended when using hasRole. Consider using hasAuthority instead."); return hasAuthority(ROLE_PREFIX + role); } As we can see, the ROLE_PREFIX is hardcoded here, and all roles must contain it to pass verification. We see similar behavior when using method security annotations such as @RolesAllowed. 3. Use Authorities Instead of Roles The simplest way to solve this issue is to use authorities instead of roles. Authorities don’t require the expected prefixes. If we’re comfortable using them, choosing authorities helps us avoid problems related to prefixes. 3.1. SecurityFilterChain-Based Configuration Let’s modify our user details in the UserDetailsConfig class: @Configuration public class UserDetailsConfig { @Bean public InMemoryUserDetailsManager userDetailsService() { PasswordEncoder encoder = PasswordEncoderFactories.createDelegatingPasswordEncoder(); UserDetails admin = User.withUsername("admin") .password(encoder.encode("password")) .authorities(Arrays.asList(new SimpleGrantedAuthority("ADMIN"), new SimpleGrantedAuthority("ADMINISTRATION"))) .build(); return new InMemoryUserDetailsManager(admin); } } We’ve added an authority called ADMINISTRATION for our admin user. Now we’ll create the security config based on authority access: @Configuration @EnableWebSecurity public class AuthorityBasedSecurityJavaConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { return http.authorizeHttpRequests (authorizeRequests -> authorizeRequests .requestMatchers("/test-resource").hasAuthority("ADMINISTRATION")) .httpBasic(withDefaults()) .build(); } } In this configuration, we’ve implemented the same access restriction concept but used the AuthorityAuthorizationManager.hasAuthority() method. Let’s set the new security configuration in the context and call our secured endpoint: @WebMvcTest(controllers = TestSecuredController.class) @ContextConfiguration(classes = { AuthorityBasedSecurityJavaConfig.class, UserDetailsConfig.class, TestSecuredController.class }) public class AuthorityBasedSecurityFilterChainIntegrationTest { @Autowired private WebApplicationContext wac; private MockMvc mockMvc; @BeforeEach void setup() { mockMvc = MockMvcBuilders .webAppContextSetup(wac) .apply(SecurityMockMvcConfigurers.springSecurity()) .build(); } @Test void givenAuthorityBasedSecurityJavaConfig_whenCallTheResourceWithAdminAuthority_thenOkResponseCodeExpected() throws Exception { MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource") .header("Authorization", basicAuthHeader("admin", "password")); ResultActions performed = mockMvc.perform(with); MvcResult mvcResult = performed.andReturn(); assertEquals(200, mvcResult.getResponse().getStatus()); } } As we can see, we could access the test resource using the same user with the authorities-based security configuration. 3.2. Annotation-Based Configuration To start using the annotation-based approach, we first need to enable method security.
Let’s create a security configuration with the @EnableMethodSecurity annotation: @Configuration @EnableWebSecurity @EnableMethodSecurity(jsr250Enabled = true) public class MethodSecurityJavaConfig { } Now, let’s add one more endpoint to our secured controller: @RestController public class TestSecuredController { @PreAuthorize("hasAuthority('ADMINISTRATION')") @GetMapping("/test-resource-method-security-with-authorities-resource") public ResponseEntity<String> testAdminAuthority() { return ResponseEntity.ok("GET request successful"); } } Here, we’ve used the @PreAuthorize annotation with a hasAuthority expression, specifying our expected authority. With this preparation in place, we can call our secured endpoint: @WebMvcTest(controllers = TestSecuredController.class) @ContextConfiguration(classes = { MethodSecurityJavaConfig.class, UserDetailsConfig.class, TestSecuredController.class }) public class AuthorityBasedMethodSecurityIntegrationTest { @Autowired private WebApplicationContext wac; private MockMvc mockMvc; @BeforeEach void setup() { mockMvc = MockMvcBuilders .webAppContextSetup(wac) .apply(SecurityMockMvcConfigurers.springSecurity()) .build(); } @Test void givenMethodSecurityJavaConfig_whenCallTheResourceWithAdminAuthority_thenOkResponseCodeExpected() throws Exception { MockHttpServletRequestBuilder with = MockMvcRequestBuilders .get("/test-resource-method-security-with-authorities-resource") .header("Authorization", basicAuthHeader("admin", "password")); ResultActions performed = mockMvc.perform(with); MvcResult mvcResult = performed.andReturn(); assertEquals(200, mvcResult.getResponse().getStatus()); } } We’ve attached the MethodSecurityJavaConfig and the same UserDetailsConfig to the test context. Then, we called the test-resource-method-security-with-authorities-resource endpoint and successfully accessed it. 4. Custom Authorization Manager for SecurityFilterChain If we need to use roles without the ROLE_ prefix, we must attach a custom AuthorizationManager to the SecurityFilterChain configuration. This custom manager won’t have hardcoded prefixes. Let’s create such an implementation: public class CustomAuthorizationManager implements AuthorizationManager<RequestAuthorizationContext> { private final Set<String> roles = new HashSet<>(); public CustomAuthorizationManager withRole(String role) { roles.add(role); return this; } @Override public AuthorizationDecision check(Supplier<Authentication> authentication, RequestAuthorizationContext object) { for (GrantedAuthority grantedRole : authentication.get().getAuthorities()) { if (roles.contains(grantedRole.getAuthority())) { return new AuthorizationDecision(true); } } return new AuthorizationDecision(false); } } We’ve implemented the AuthorizationManager interface. In our implementation, we can specify multiple roles that allow the call to pass the authority verification. In the check() method, we verify whether the authority from the authentication is in the set of expected roles.
Now, let’s attach our custom authorization manager to the SecurityFilterChain: @Configuration @EnableWebSecurity public class CustomAuthorizationManagerSecurityJavaConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeHttpRequests (authorizeRequests -> { hasRole(authorizeRequests.requestMatchers("/test-resource"), "ADMIN"); }) .httpBasic(withDefaults()); return http.build(); } private void hasRole(AuthorizeHttpRequestsConfigurer.AuthorizedUrl authorizedUrl, String role) { authorizedUrl.access(new CustomAuthorizationManager().withRole(role)); } } Instead of the AuthorityAuthorizationManager.hasRole() method, here we’ve used the AuthorizedUrl.access() method, which allows us to use our custom AuthorizationManager implementation. Now let’s configure the test context and call the secured endpoint: @WebMvcTest(controllers = TestSecuredController.class) @ContextConfiguration(classes = { CustomAuthorizationManagerSecurityJavaConfig.class, TestSecuredController.class, UserDetailsConfig.class }) public class RemovingRolePrefixIntegrationTest { @Autowired WebApplicationContext wac; private MockMvc mockMvc; @BeforeEach void setup() { mockMvc = MockMvcBuilders .webAppContextSetup(wac) .apply(SecurityMockMvcConfigurers.springSecurity()) .build(); } @Test public void givenCustomAuthorizationManagerSecurityJavaConfig_whenCallTheResourceWithAdminRole_thenOkResponseCodeExpected() throws Exception { MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource") .header("Authorization", basicAuthHeader("admin", "password")); ResultActions performed = mockMvc.perform(with); MvcResult mvcResult = performed.andReturn(); assertEquals(200, mvcResult.getResponse().getStatus()); } } We’ve attached our CustomAuthorizationManagerSecurityJavaConfig and called the test-resource endpoint. As expected, we received the 200 response code. 5. Override GrantedAuthorityDefaults for Method Security In the annotation-based approach, we can override the prefix we’ll use with our roles. Let’s modify our MethodSecurityJavaConfig: @Configuration @EnableWebSecurity @EnableMethodSecurity(jsr250Enabled = true) public class MethodSecurityJavaConfig { @Bean GrantedAuthorityDefaults grantedAuthorityDefaults() { return new GrantedAuthorityDefaults(""); } } We’ve added the GrantedAuthorityDefaults bean and passed an empty string as the constructor parameter. This empty string will be used as the default role prefix. For this test case, we’ll create a new secured endpoint: @RestController public class TestSecuredController { @RolesAllowed({"ADMIN"}) @GetMapping("/test-resource-method-security-resource") public ResponseEntity<String> testAdminRole() { return ResponseEntity.ok("GET request successful"); } } We’ve added @RolesAllowed({“ADMIN”}) to this endpoint so that only users with the ADMIN role can access it.
Let’s call it and see what the response will be: @WebMvcTest(controllers = TestSecuredController.class) @ContextConfiguration(classes = { MethodSecurityJavaConfig.class, UserDetailsConfig.class, TestSecuredController.class }) public class RemovingRolePrefixMethodSecurityIntegrationTest { @Autowired WebApplicationContext wac; private MockMvc mockMvc; @BeforeEach void setup() { mockMvc = MockMvcBuilders .webAppContextSetup(wac) .apply(SecurityMockMvcConfigurers.springSecurity()) .build(); } @Test public void givenMethodSecurityJavaConfig_whenCallTheResourceWithAdminRole_thenOkResponseCodeExpected() throws Exception { MockHttpServletRequestBuilder with = MockMvcRequestBuilders.get("/test-resource-method-security-resource") .header("Authorization", basicAuthHeader("admin", "password")); ResultActions performed = mockMvc.perform(with); MvcResult mvcResult = performed.andReturn(); assertEquals(200, mvcResult.getResponse().getStatus()); } } We’ve successfully received a 200 response code when calling the test-resource-method-security-resource endpoint as a user whose ADMIN role carries no prefix. 6. Conclusion In this article, we’ve explored various approaches to avoid issues with ROLE_ prefixes in Spring Security. Some methods require customization, while others use the default functionality. The approaches described in the article can help us avoid adding prefixes to the roles in our user details, which may sometimes be impossible. As usual, the full source code can be found over on GitHub.