Java 9 Features

Just like Java 8 is known as the major release for lambdas, streams and API changes, Java 9 is all about Project Jigsaw, utilities and changes under the hood. In this post, I would like to talk about some of the most exciting features targeted for Java 9. The full list of new features is available here.

Modular System – Jigsaw Project

This is the biggest addition to the JDK and brings modularity to the Java platform. A growing codebase quite often leads to complicated, tangled “spaghetti code”. It is hard to encapsulate code, and there are no clear dependencies between different parts (JAR files) of a system. Any public class can access any other public class on the classpath. This can lead to inadvertent usage of classes that were never supposed to be public API. In fact, the classpath itself is not an elegant way of verifying whether the required JARs are present or whether there are any duplicate entries. The module system handles these challenges very well.

The capabilities of a modular system are quite similar to those of the OSGi framework. Modules have an inherent concept of dependencies, can expose a public API and keep implementation details hidden/private. A key motivation is to provide a modular JVM with a smaller memory footprint, so that it can run on constrained devices with only those modules and APIs which are essential.

Modular JAR files contain an additional module descriptor. In this module descriptor, dependencies on other modules are expressed through “requires” statements. Additionally, “exports” statements control which packages are accessible to other modules. All packages which are not exported are encapsulated in the module by default. Let’s see an example of a module descriptor, which lives in `module-info.java`:

module com.thistechnologylife.java9.modules.house {
    requires com.thistechnologylife.java9.modules.furniture;
    exports com.thistechnologylife.java9.modules.lawn;
}

The house module requires the furniture module and exports the lawn package.
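
For completeness, a minimal sketch of what the furniture module’s own descriptor might look like (the exported package name is an assumption); it must export the packages that house uses:

module com.thistechnologylife.java9.modules.furniture {
    exports com.thistechnologylife.java9.modules.furniture;
}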

JShell – The Interactive Java REPL

Java 9 comes with a new tool called “jshell”, which stands for Java Shell and is also known as a REPL (Read Evaluate Print Loop). It can be used to execute and test any Java construct like a class, interface, enum, object or statement very easily, without wrapping it in a separate method or project.

JShell can be launched directly from the console, and you can start typing and executing Java code immediately. One great use of jshell is testing regular expressions.

The jshell executable itself can be found in the <JAVA_HOME>/bin folder:

jdk-9\bin>jshell.exe
| Welcome to JShell -- Version 9
| For an introduction type: /help intro

jshell> "Say Hello To Java 9 Features".substring(13,19);
$5 ==> "Java 9"

Improved Network Communication with HTTP/2.0 Support

Java 9 comes with a new way of performing HTTP-based communication. It provides a long-awaited replacement for the old HttpURLConnection and supports both WebSockets and HTTP/2. In Java 9 the new API ships as an incubator module, under the jdk.incubator.http package; it was later standardized as java.net.http (in Java 11).

HttpClient client = HttpClient.newHttpClient();

HttpRequest request =
    HttpRequest.newBuilder(URI.create("http://www.amazon.com"))
               .header("User-Agent", "Java")
               .GET()
               .build();

HttpResponse<String> response = client.send(request, HttpResponse.BodyHandler.asString());

HttpClient also provides new APIs to deal with HTTP/2 features such as streams and server push.
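
Requests can also be sent asynchronously. A sketch against the Java 9 incubator API (in the Java 11 standard API the handler becomes HttpResponse.BodyHandlers.ofString()):

HttpClient client = HttpClient.newHttpClient();

HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.amazon.com"))
    .GET()
    .build();

// sendAsync returns immediately; the response is processed when it arrives
client.sendAsync(request, HttpResponse.BodyHandler.asString())
      .thenApply(HttpResponse::body)
      .thenAccept(System.out::println)
      .join();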

Enhanced Process API

The Process API previously had limited capability for controlling and managing operating-system processes. Even to get the process PID, you would need to either use native code or a tricky workaround. Not to forget, you would need a different implementation for each platform.

In Java 9, the Process API has been enhanced for controlling and managing operating-system processes. Most of the new functionality is present in the class java.lang.ProcessHandle. Process-specific information can be obtained in the following manner:

ProcessHandle procHandle = ProcessHandle.current();
long pid = procHandle.pid();
ProcessHandle.Info processInfo = procHandle.info();
Optional<String[]> args = processInfo.arguments();
Optional<String> cmd = processInfo.commandLine();
Optional<Instant> startTime = processInfo.startInstant();
Optional<Duration> cpuUsage = processInfo.totalCpuDuration();

As the example illustrates, the current method returns an object representing the currently running JVM’s process. The Info subclass provides details about the process.

Now let’s see how we can stop all child processes using the destroy method:

Stream<ProcessHandle> childProcesses = ProcessHandle.current().children();
childProcesses.forEach(procHandle -> {
    assertTrue("Could not kill process " + procHandle.pid(), procHandle.destroy());
});

Stream API improvements

Java 9 brings significant improvements to the Stream API, which packs a punch in creating declarative pipelines of transformations on collections. New methods have been added to the java.util.stream.Stream interface – dropWhile, takeWhile and ofNullable – and the iterate method gets a new overload that accepts a Predicate deciding when to stop iterating.

The takeWhile method takes a predicate as an argument and returns a Stream of the leading elements of the given Stream, up to (but not including) the first element for which the Predicate returns false. If the very first value does not satisfy the Predicate, it returns an empty Stream.
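
As an illustration, a minimal sketch contrasting takeWhile with its sibling dropWhile (the printed values assume an ordered stream):

import java.util.stream.Stream;

public class StreamTakeWhileDemo {
    public static void main(String[] args) {
        // takeWhile keeps the longest leading run satisfying the predicate
        Stream.of(1, 2, 3, 10, 4, 5)
              .takeWhile(n -> n < 5)
              .forEach(System.out::println); // 1 2 3

        // dropWhile discards that leading run and keeps the rest
        Stream.of(1, 2, 3, 10, 4, 5)
              .dropWhile(n -> n < 5)
              .forEach(System.out::println); // 10 4 5
    }
}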

In this post I have tried to give a glimpse of some of the newly introduced features in Java 9. Trust me, we have just scratched the surface; there are many more useful and diverse features coming with Java 9. I am quite excited about this. What about you?

Evolution of Java Security Architecture

The Java platform has had a strong emphasis on security from the very beginning. Its multi-platform support has made people sceptical about the way Java handles security and whether there are hidden risks. Java is a type-safe language and provides automatic garbage collection. The use of a secure class-loading and verification mechanism ensures that only legitimate Java code is executed. The initial version of the Java platform focused specifically on providing a safe environment for running potentially untrusted code, such as Java applets. With the growth of the platform and its widespread deployment, the Java security architecture has evolved to support an expanding set of services. Over the years, the architecture has grown to include a large set of APIs, tools and implementations of security algorithms, mechanisms and protocols. Java security revolves around two key aspects:

  • A secure platform on which to run applications.
  • Tools and services available in the Java programming language that facilitate a range of security-sensitive applications.

Language Level Security

There are multiple mechanisms inbuilt within the Java language to ensure secure programming:

  • Strictly Object-Oriented – All security-related advantages of the object-oriented paradigm apply, as there can be no structures outside the boundary of a class.
  • Final Classes and Methods – Undesired modification of functionality can be prevented by making classes and methods final.
  • Automated Memory Management – There are fewer chances of memory-access errors, as the Java language does all the memory management automatically.
  • Automatic Initialization – All memory constructs on the heap are automatically initialized. Stack-based local variables, by contrast, are not pre-initialized and must be definitely assigned before use, which the compiler enforces. This ensures that classes and instances never hold undefined values.

Basic Security Architecture

There are a set of APIs spanning all the major security areas like cryptography, public key infrastructure, authentication, secure communication and access control. These APIs help in easy integration of security in application code. They are designed around the following principles:

  • Implementation Independence – Applications need not worry about the finer details of security and can request security services from the Java platform. These security services are implemented in providers which are plugged into the Java platform via a standard interface. An application may leverage multiple independent providers for security.
  • Implementation Interoperability – By their very nature, providers are interoperable across applications. Just like an application is not bound to a specific provider, similarly a provider is not bound to any specific application.
  • Algorithm Extensibility – There are a number of built-in providers that implement a basic set of security services. There is a provision for installing custom providers which address specific needs of applications.

Security Providers

The concept of a security provider is encapsulated by the java.security.Provider class. It specifies the provider’s name and lists the security services it supports. Multiple providers may be configured at the same time and listed in order of preference. When a security service is requested, the provider with the highest priority is selected. Message digest creation is one type of service available from providers. An application can invoke the getInstance method of the java.security.MessageDigest class to obtain an implementation of a specific message digest algorithm like MD5.

MessageDigest md = MessageDigest.getInstance("MD5");

Optionally, the provider name can be specified while requesting an implementation.

MessageDigest md = MessageDigest.getInstance("MD5", "ProviderA");
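
To inspect which providers are installed and their preference order, a quick sketch:

import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Security.getProviders() returns providers in preference order
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersion());
        }
    }
}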

The following figure illustrates these options for requesting an MD5 message digest implementation. Both instances depict three providers that implement the message digest algorithm, ordered by preference from left to right. In the first instance, an application requests the MD5 algorithm without specifying a provider name. The providers are searched in preference order, and the implementation from the first provider supplying that algorithm, ProviderB, is returned. In the second instance, the application requests the MD5 implementation from a specific provider, ProviderC. In this case, the implementation from that provider is returned, even though a provider with a higher preference order, ProviderB, also supplies an MD5 implementation.

Figure 1 – Provider Selection

Cryptography

Java provides a cryptography framework for accessing and developing cryptographic functionality on the Java platform. It includes APIs for a large variety of cryptographic services, such as message digest algorithms, digital signature algorithms, symmetric stream encryption and asymmetric encryption. The cryptographic interfaces are provider-based, which allows for multiple interoperable cryptography implementations.

Public Key Infrastructure

The Public Key Infrastructure (PKI) framework enables secure exchange of information based on public key cryptography. It allows identities (of people, organizations, etc.) to be bound to digital certificates and provides a means of verifying the authenticity of certificates. PKI comprises keys, certificates, public key encryption, and trusted Certification Authorities (CAs) who generate and digitally sign certificates.
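
As a small illustration of the PKI APIs, a sketch (the certificate file name is a placeholder) that parses an X.509 certificate and prints its subject:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class InspectCertificate {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        // "server.cer" is a hypothetical path to an encoded certificate
        try (FileInputStream in = new FileInputStream("server.cer")) {
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println(cert.getSubjectX500Principal());
        }
    }
}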

Authentication

Authentication helps in identifying the user of an executing Java program. The Java platform provides APIs to perform user authentication via pluggable login modules.

Secure Communication

It is quite important to ensure that the data that travels across a network is not accessed by an unintended recipient. In case the data includes private information, like passwords and credit card numbers, steps must be taken to make the data unintelligible to unauthorized parties. The Java platform provides API support and provider implementations for a number of standard secure communication protocols like SSL/TLS, SASL, GSS-API and Kerberos.

Access Control

The access control architecture protects access to sensitive resources (for example, local files) or sensitive application code (for example, methods in a class). All access control decisions are mediated by a security manager, represented by the java.lang.SecurityManager class. A SecurityManager must be installed into the Java runtime in order to activate the access control checks.
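
A minimal sketch of activating these checks programmatically (a security manager can also be installed with the java.security.manager command-line option):

public class EnableSecurityManager {
    public static void main(String[] args) {
        // Installing a SecurityManager activates access control checks;
        // subsequent sensitive operations are checked against the policy
        System.setSecurityManager(new SecurityManager());
        System.out.println(System.getSecurityManager() != null); // true
    }
}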

Enhancements in JDK 8

JDK 8 has a wide range of security-related enhancements and features. Some of them are listed below:

TLS 1.1 and TLS 1.2 enabled by default

TLS 1.1 and TLS 1.2 are enabled by default on the client side by the SunJSSE provider. The protocols that SunJSSE enables can be controlled using the new system property jdk.tls.client.protocols.
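
For example, to restrict a client to specific protocol versions (MyApp is a placeholder main class):

java -Djdk.tls.client.protocols="TLSv1.1,TLSv1.2" MyApp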

Limited doPrivileged

A new version of the method AccessController.doPrivileged can be used to assert a subset of its privileges, without preventing the full traversal of the stack to check for other permissions.
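
A minimal sketch of the new overload; the asserted PropertyPermission is just an illustrative choice:

import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.PropertyPermission;

public class LimitedDoPrivilegedDemo {
    public static void main(String[] args) {
        AccessControlContext acc = AccessController.getContext();
        // Assert only the permission to read user.home; checks for any
        // other permission still traverse the full stack
        String home = AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("user.home"),
                acc,
                new PropertyPermission("user.home", "read"));
        System.out.println(home);
    }
}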

Stronger Algorithms for Password-Based Encryption

Many AES-based Password-Based Encryption (PBE) algorithms, such as PBEWithSHA256AndAES_128 and PBEWithSHA512AndAES_256, have been added to the SunJCE provider.

SSL/TLS Server Name Indication (SNI) Extension Support in JSSE Server

The SNI extension extends the SSL/TLS protocols to indicate what server name the client is attempting to connect to during handshaking. Servers can use server name indication information to decide if specific SSLSocket or SSLEngine instances should accept a connection. JDK 7 already has SNI extension enabled by default. JDK 8 supports the SNI extension for server applications.

KeyStore Enhancements

A new keytool command option, -importpassword, accepts a password and stores it securely as a secret key. A new class, java.security.DomainLoadStoreParameter, has been added to support the DKS keystore type.

SHA-224 Message Digests

JDK 8 has enhanced cryptographic algorithms with the SHA-224 variant of the SHA-2 family of message-digest implementations.

Improved Support for High Entropy Random Number Generation

The SecureRandom class helps in generating cryptographically strong random numbers used for private or public keys, ciphers, signed messages, and so on.
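
JDK 8 also adds SecureRandom.getInstanceStrong(), which returns an instance backed by the algorithms named in the securerandom.strongAlgorithms security property; a quick sketch:

import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class StrongRandomDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        SecureRandom strong = SecureRandom.getInstanceStrong();
        byte[] seed = new byte[32];
        strong.nextBytes(seed); // may block until enough entropy is available
        System.out.println("Generated " + seed.length + " random bytes");
    }
}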

64-bit PKCS11 for Windows

The PKCS 11 provider support for Windows has been expanded to include 64-bit versions.

Weak Encryption Disabled by Default

The DES-related Kerberos 5 encryption types are not supported by default. These encryption types can be enabled by adding allow_weak_crypto=true in the krb5.conf file, but DES-related encryption types are considered highly insecure and should be avoided.

Unbound SASL for the GSS-API/Kerberos 5 mechanism

The Krb5LoginModule principal value in a JAAS configuration file can be set to an asterisk (*) on the acceptor side to denote an unbound acceptor. This means that the initiator can access the server using any service principal name, provided the acceptor has the long-term secret keys to that service. The name can be retrieved by the acceptor using the GSSContext.getTargName() method after the context is established.

SASL service for multiple host names

While creating a SASL server, the server name can be set to null to denote an unbound server, which means a client can request the service using any server name.

Java 8 has taken a definitive step in making Java more secure by introducing these new security enhancements.

Microservices Asynchronous Communication

Communication among microservices can be either synchronous or asynchronous. Each mode has its own pros and cons. In this post, I would like to share the mechanics of asynchronous communication. As an example, let’s consider MyKart, an online shopping portal. Just like on a typical e-commerce site, customers (users) can log in, browse through product categories, select products and order after paying online. The system follows a microservice architecture and is built on top of the Java stack with a Node.js/JavaScript-based UI.

Let’s see how RabbitMQ can be used for asynchronous communication amongst the microservices.

Service Interaction

The interaction amongst the different services is depicted below:

Figure – Service Interaction Sequence Diagram

A typical use case for ordering products is:

  1. Customer (user) logs into the MyKart portal. This is facilitated by the Order service. For existing customers, the details are fetched from the Customer service. For new users, a new customer account is created.
  2. Customer can scan through all the different categories of products. The Order service queries the Catalog service to get the details of all the products.
  3. Customer selects the products which he/she wants to purchase. The Billing service handles the online transaction; a payment gateway is used to pay for the purchased products.
  4. The Order service maintains records of purchased products and notifies the Shipment service of the transaction. The Shipment service will further inform the logistics team to prepare for shipping the purchased products.

Service Interaction and Communication

In order to help with scalability, the bulk of the interaction amongst the services would be through asynchronous communication, facilitated by the RabbitMQ messaging system. It is worth covering the key concepts of RabbitMQ here.

RabbitMQ Fundamentals

The three parts that route messages in RabbitMQ are exchanges, queues, and bindings. Applications publish messages to exchanges. Exchanges route those messages to queues based on bindings. Bindings tie queues to exchanges and define the routing key (or topic) with which a queue is bound.

Exchange

Besides the exchange type, exchanges are declared with a number of attributes, the most important of which are:

  • Name
  • Durability (exchanges survive broker restart)
  • Auto-delete (exchange is deleted when all queues have finished using it)
  • Arguments (these are broker-dependent)

Exchanges can be durable or transient. Durable exchanges survive broker restart whereas transient exchanges do not. The different types of exchanges are as follows:

  • Default Exchange – The default exchange is a direct exchange with no name (empty string) pre-declared by the broker. It has one special property that makes it very useful for simple applications: every queue that is created is automatically bound to it with a routing key which is the same as the queue name.
  • Direct Exchange – A direct exchange delivers messages to queues based on the message routing key. A direct exchange is ideal for the unicast routing of messages.
  • Fanout Exchange – A fanout exchange routes messages to all of the queues that are bound to it and the routing key is ignored. If N queues are bound to a fanout exchange, when a new message is published to that exchange a copy of the message is delivered to all N queues. Fanout exchanges are ideal for the broadcast routing of messages.
  • Topic Exchange – Topic exchanges route messages to one or many queues based on matching between a message routing key and the pattern that was used to bind a queue to an exchange. The topic exchange type is often used to implement various publish/subscribe pattern variations. Topic exchanges are commonly used for the multicast routing of messages.
  • Headers Exchange – A headers exchange is designed for routing on multiple attributes that are more easily expressed as message headers than a routing key. Headers exchanges ignore the routing key attribute. Instead, the attributes used for routing are taken from the headers attribute. A message is considered matching if the value of the header equals the value specified upon binding.

Queues

Queues store messages that are consumed by applications. The key properties of queues are as following:

  • Queue Name – Applications may pick queue names or ask the broker to generate a name for them.
  • Queue Durability – Durable queues are persisted to disk and thus survive broker restarts. Queues that are not durable are called transient. Not all scenarios and use cases mandate queues to be durable.
  • Bindings – These are rules that exchanges use (among other things) to route messages to queues. To instruct an exchange E to route messages to a queue Q, Q has to be bound to E. Bindings may have an optional routing key attribute used by some exchange types. A sketch of declaring and binding these pieces follows this list.
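
As mentioned above, here is a sketch of declaring an exchange and a queue and binding them, assuming the RabbitMQ Java client (amqp-client 5.x) and the exchange, queue and key names used later in this post:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RabbitSetupDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Durable direct exchange; survives broker restart
            channel.exchangeDeclare("Customer", "direct", true);
            // Durable, non-exclusive, non-auto-delete queue
            channel.queueDeclare("CustomerReadQueue", true, false, false, null);
            // Bind the queue to the exchange with a routing key
            channel.queueBind("CustomerReadQueue", "Customer", "customer.read");
            // Publish a message routed by that key
            channel.basicPublish("Customer", "customer.read", null,
                    "{\"customerId\":42}".getBytes("UTF-8"));
        }
    }
}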

Message Attributes

Messages have a payload and attributes. Some of the key attributes are listed below:

  • Content type
  • Content encoding
  • Routing key
  • Delivery mode (persistent or not)
  • Message priority
  • Message publishing timestamp
  • Expiration period
  • Publisher application id

Message Acknowledgements – Networks can be unreliable, and applications may fail, so it is often necessary to have some kind of processing acknowledgement. In some cases it is only necessary to acknowledge the fact that a message has been received. In other cases an acknowledgement means that a message was validated and processed by a consumer.
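
A consumer-side sketch with manual acknowledgement (same amqp-client 5.x assumption; the queue name follows the earlier example):

import com.rabbitmq.client.Channel;

public class AckingConsumer {
    public static void consume(Channel channel) throws Exception {
        boolean autoAck = false; // acknowledge only after successful processing
        channel.basicConsume("CustomerReadQueue", autoAck,
                (consumerTag, delivery) -> {
                    String body = new String(delivery.getBody(), "UTF-8");
                    // ... validate and process the message here ...
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                },
                consumerTag -> { /* consumer was cancelled */ });
    }
}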

Service Communication

All the services of MyKart would utilize RabbitMQ infrastructure for communicating with each other. This is depicted in the following diagram:

Figure – RabbitMQ Communication

When different services want to implement request-response communication using RabbitMQ, there are two key patterns:

  • Exclusive reply queue per request – Each request creates its own reply queue. The benefit is that this is simple to implement, and there is no problem correlating the response with the request, since each request has its own response consumer. If the connection between the client and the broker fails before a response is received, the broker disposes of any remaining reply queues and the response message is lost. A key consideration is that reply queues must be cleaned up if a problem on the server means it never publishes the response. This pattern also has a performance cost, because a new queue and consumer have to be created for each request.
  • Exclusive reply queue per client – Each client connection maintains a reply queue which many requests can share. This avoids the performance cost of creating a queue and consumer per request, but adds the overhead that the client needs to keep track of the reply queue and match up responses with their respective requests. The standard way of doing this is with a correlation id that is copied by the server from the request to the response.

Durable reply queue – In both the above patterns, the response message can be lost if the connection between the client and broker goes down while the response is in flight. This is because they use exclusive queues that are deleted by the broker when the connection that owns them is closed. This can be avoided by using a non-exclusive reply queue. However, this creates some management overhead. One needs some way to name the reply queue and associate it with a particular client. The problem is that it’s difficult for the client to know whether a given reply queue belongs to itself or to another instance. It’s easy to naively create a situation where responses are delivered to the wrong instance of the client. In the worst case, we would have to manually create and name response queues, which removes one of the main benefits of choosing broker-based messaging in the first place.

Rather than using an exclusive reply queue per request, I would recommend using an exclusive reply queue per client. Each service would have its own exchange of type direct, to which other services would send request events. For response events, the default exchange would be leveraged. Each service would have a response queue, bound to the default exchange, for every service from which it expects responses.

Request – Response Event
The request and response messages would be sent as an event structure. They would contain the following information:

  1. Correlation Id
    In order to correlate a response event with its request event, a unique correlation id would be used. This helps in uniquely identifying a request and tying it up with the response. One way of generating a unique id is to leverage the UUID class of Java. See the code snippet below:
    UUID.randomUUID().toString();
    There are other mechanisms for unique id generation, but in my view this is the most suitable and developer-friendly.
  2. Payload
    The request information and response details would be serialized as a JSON string. While sending a request, the request object would be converted into JSON format. Similarly, while receiving a response, the JSON information would be used to populate the actual response object. The third-party Jackson library would be leveraged for object-to-JSON conversion and vice versa; a sketch follows this list.
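
A minimal sketch of the Jackson conversion (ObjectMapper is Jackson’s entry point; the request/response classes themselves are plain POJOs):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonCodec {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Serialize a request/response object to its JSON representation
    public static String toJson(Object value) throws Exception {
        return MAPPER.writeValueAsString(value);
    }

    // Populate an object of the given type from a JSON payload
    public static <T> T fromJson(String json, Class<T> type) throws Exception {
        return MAPPER.readValue(json, type);
    }
}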

Service Interaction
Let us take an example to see how the different services would leverage the messaging infrastructure for interaction. Consider the use case where a customer logs in to the Order service, which then fetches the customer information.

  1. The Order service needs to get customer information, like name, address, etc., and for this it has to query the Customer service.
  2. The request object contains the customer id. It is converted into JSON format using Jackson.
  3. The Java UUID class is used to generate the correlation id.
  4. The message is sent to the Customer exchange with customer.read as the routing key. The CustomerReadQueue is bound to the Customer exchange with this key, so the message gets delivered to CustomerReadQueue.
  5. The reply_to field is used to indicate the queue on which the response event should be sent. In this case, the reply_to field is filled with the value CustomerResponseQueue.
  6. Once the Customer service is able to fetch the desired information, it converts it into JSON format. It then populates the Correlation Id received earlier. The response event is then sent to CustomerResponseQueue, which is bound to the default exchange.
  7. The Order service waits for response messages on CustomerResponseQueue. Once it receives an event, it uses the JSON payload to populate the response object. The Correlation Id is used to map it to the request object sent earlier. A publisher-side sketch of steps 3–5 follows this list.
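
A publisher-side sketch of steps 3–5, again assuming the amqp-client API; names match the walkthrough above:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.UUID;

public class CustomerReadPublisher {
    public static String publish(Channel channel, String jsonPayload) throws Exception {
        String correlationId = UUID.randomUUID().toString(); // step 3
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(correlationId)
                .replyTo("CustomerResponseQueue") // step 5
                .build();
        // Step 4: routed via the Customer exchange with key customer.read
        channel.basicPublish("Customer", "customer.read", props,
                jsonPayload.getBytes("UTF-8"));
        return correlationId; // used later to match the response
    }
}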

Service Notification
Besides the regular request-response interaction, the messaging infrastructure can also be used for one-way notifications. Any good application should log important information and also raise metrics/alarms for the same. All the services of MyKart would send important notifications to the Logging service and the Metrics service. The Logging service would use these events to log information in the logging infrastructure. The Metrics service would use this information to raise metrics and alarms. These metrics help with the serviceability of microservices.

For MyKart, there is a fanout exchange named Notification. It sends the received information to all the registered queues. Both the Logging service and the Metrics service have their queues bound to this exchange. Events for the Logging service are received on LoggingQueue and events for the Metrics service are received on MetricsQueue.

Conclusion

As has been demonstrated in the shopping cart example, RabbitMQ can be used effectively for asynchronous communication among different services.

Java 8 Lambda Expression for Design Patterns – Strategy Design Pattern

The strategy pattern defines a family of algorithms encapsulated in a driver class, usually known as the Context, and makes the algorithms interchangeable. It provides a mechanism to choose the appropriate algorithm at a particular time.

The algorithms (strategies) are chosen at runtime either by a Client or by the Context. The Context class handles all the data during the interaction with the client.

The key participants of the Strategy pattern are represented below:

Figure – Strategy Pattern
  • Strategy – Specifies the interface for all algorithms. This interface is used to invoke the algorithms defined by a ConcreteStrategy.
  • Context – Maintains a reference to a Strategy object.
  • ConcreteStrategy – Actual implementation of the algorithm as per the Strategy interface.

Now let’s look at a concrete example of the strategy pattern and see how it gets transformed with lambda expressions. Suppose we have different types of rates for calculating income tax. Based on whether tax is paid in advance or late, there is a rebate or a penalty, respectively. We could encapsulate this functionality in a single class as different methods, but that class would need modification whenever some other tax calculation is required in future. This is not an efficient approach. Changes in the implementation of a class should be the last resort.

Let’s take an optimal approach by using Strategy pattern. We will make an interface for Tax Strategy with a basic method:

public interface TaxStrategy {

	public double calculateTax(double income);
}

Now let’s define the concrete strategy for normal income tax.

public class PersonalTaxStrategy implements TaxStrategy {

	public PersonalTaxStrategy() { }

	@Override
	public double calculateTax(double income) {

		System.out.println("PersonalTax");

		double tax = income * 0.3;
		return tax;
	}
}

The PersonalTaxStrategy class conforms to the TaxStrategy interface. Similarly, let’s define a concrete strategy for late tax payment which incurs a penalty.

public class PersonalTaxPenaltyStrategy implements TaxStrategy {

	public PersonalTaxPenaltyStrategy() { }

	@Override
	public double calculateTax(double income) {

		System.out.println("PersonalTaxWithPenalty");

		double tax = income * 0.4;
		return tax;
	}
}

Next lets define a concrete strategy for advance tax payment which results in tax rebate.

public class PersonalTaxRebateStrategy implements TaxStrategy {

	public PersonalTaxRebateStrategy() { }

	@Override
	public double calculateTax(double income) {

		System.out.println("PersonalTaxWithRebate");

		double tax = income * 0.2;
		return tax;
	}
}

Now let’s combine all classes and interfaces defined to leverage the power of Strategy pattern. Let the main method act as Context for the different strategies. See just one sample interplay of all these classes:

import java.util.Arrays;
import java.util.List;

public class TaxStrategyMain {

	public static void main(String [] args) {

		//Create a List of Tax strategies for different scenarios
		List<TaxStrategy> taxStrategyList =
				Arrays.asList(
						new PersonalTaxStrategy(),
						new PersonalTaxPenaltyStrategy(),
						new PersonalTaxRebateStrategy());

		//Calculate Tax for different scenarios with corresponding strategies
		for (TaxStrategy taxStrategy : taxStrategyList) {
			System.out.println(taxStrategy.calculateTax(30000.0));
		}
	}
}

Running this gives the following output:

PersonalTax
9000.0
PersonalTaxWithPenalty
12000.0
PersonalTaxWithRebate
6000.0

It clearly demonstrates how different tax rates can be calculated by using the appropriate concrete strategy class. I have combined all the concrete strategies (algorithms) in a list and then accessed them by iterating over the list.

What we have seen till now is just the standard strategy pattern; it’s been around for a long time. In these times when functional programming is the new buzzword, one may ponder whether, with the support of lambda expressions in Java, things can be done differently. Indeed, since the strategy interface is a functional interface (it has a single abstract method), we can rehash it using lambda expressions in Java. Let’s see how the code looks:

import java.util.Arrays;
import java.util.List;

public class TaxStrategyMainWithLambda {

	public static void main(String [] args) {

		//Create a List of Tax strategies for different scenarios with inline logic using Lambda
		List<TaxStrategy> taxStrategyList =
				Arrays.asList(
						(income) -> { System.out.println("PersonalTax"); return 0.30 * income; },
						(income) -> { System.out.println("PersonalTaxWithPenalty"); return 0.40 * income; },
						(income) -> { System.out.println("PersonalTaxWithRebate"); return 0.20 * income; }
			);

		//Calculate Tax for different scenarios with corresponding strategies
		taxStrategyList.forEach((strategy) -> System.out.println(strategy.calculateTax(30000.0)));
	}
}

Running this gives the same output:

PersonalTax
9000.0
PersonalTaxWithPenalty
12000.0
PersonalTaxWithRebate
6000.0

We can see that the use of lambda expressions makes the additional classes for concrete strategies redundant. You don’t need additional classes; simply specify the additional behavior using a lambda expression.

All the code snippets can be accessed from my github repo.

Java 8 Lambda Expression for Design Patterns – Decorator Design Pattern

The Decorator pattern (also known as Wrapper) allows behavior to be added to an individual object, either statically or dynamically, without affecting the behavior of other objects from the same class. It can be considered as an alternative to subclassing. We know that subclassing adds behavior at compile time and the change affects all instances of the original class. On the other hand, decorating can provide new behavior at run-time for selective objects.

The decorator conforms to the interface of the component it decorates so that it is transparent to the component’s clients. The decorator forwards requests to the component and may perform additional actions before or after forwarding. Transparency allows decorators to be nested recursively, thereby allowing an unlimited number of added responsibilities. The key participants of the Decorator pattern are represented below:

Figure – Decorator Pattern

  • Component – Specifies the interface for objects that can have responsibilities added to them dynamically.
  • ConcreteComponent – Defines an object to which additional responsibilities can be added.
  • Decorator – Keeps a reference to a Component object and conforms to Component’s interface. It contains the Component object to be decorated.
  • ConcreteDecorator – Adds responsibilities to the component.

Now let’s look at a concrete example of the decorator pattern and see how it gets transformed with lambda expressions. Suppose we have different types of books, and these books may have different covers or belong to different categories. We could specify a book’s category or cover type by using inheritance: Book can be abstracted as a class, and any other class can extend Book and override methods for cover or category. But this is not an efficient approach. Under this approach, the subclasses may carry unnecessary methods extended from the superclass, and if we had to add more attributes or characterizations, the parent class would have to change. Changes in the implementation of a class should be the last resort.

Let’s take an optimal approach by using Decorator pattern. We will make an interface for Book with a basic method:

public interface Book {

    public String describe();

}

A BasicBook class can implement this interface to provide minimal abstraction:

public class BasicBook implements Book {

    @Override
    public String describe() {

        return "Book";

    }

}

Next, lets define the abstract class BookDecorator which will act as the Decorator:

abstract class BookDecorator implements Book {

    protected Book book;

    BookDecorator(Book book) {
        this.book = book;
    }

    @Override
    public String describe() {
        return book.describe();
    }
}

The BookDecorator class conforms to the Book interface and also stores a reference to a Book. If we want to add category as a property to books, we can use a concrete class which extends the BookDecorator class. For the Fiction category we can have the following decorator:

public class FictionBookDecorator extends BookDecorator {

    FictionBookDecorator(Book book) {
        super(book);
    }

    @Override
    public String describe() {
        return ("Fiction " + super.describe());
    }
}

You can see that FictionBookDecorator adds the category of book in the original operation, i.e. describe. Similarly, if we want to specify Science category we can have the corresponding ScienceBookDecorator:

public class ScienceBookDecorator extends BookDecorator {

    ScienceBookDecorator(Book book) {
        super(book);
    }

    @Override
    public String describe() {
        return ("Science " + super.describe());
    }
}

ScienceBookDecorator also adds the category of book in the original operation. There can also be another set of Decorators which indicate what type of cover the book has. We can have SoftCoverDecorator describing that the book has a soft cover.

public class SoftCoverDecorator extends BookDecorator {

	SoftCoverDecorator(Book book) {
		super(book);
	}
	
	@Override
	public String describe() {	
		return (super.describe() + " with Soft Cover");
	}
}

We can also have a HardCoverDecorator describing the book has a hard cover.

public class HardCoverDecorator extends BookDecorator {
	
	HardCoverDecorator(Book book) {
		super(book);
	}
	
	@Override
	public String describe() {	
		return (super.describe() + " with Hard Cover");
	}
}

Now let’s combine all classes and interfaces defined to leverage the power of Decorator pattern. See just one sample interplay of all these classes:

import java.util.List;
import java.util.ArrayList;

public class BookDescriptionMain {
	
	public static void main(String [] args) {
		
		BasicBook book = new BasicBook();
		
		//Specify book as Fiction category
		FictionBookDecorator fictionBook = new FictionBookDecorator(book);
		
		//Specify that the book has a hard cover
		HardCoverDecorator hardCoverBook = new HardCoverDecorator(book);
		
		//What if we want to specify both the category and cover type together
		HardCoverDecorator hardCoverFictionBook = new HardCoverDecorator(fictionBook);				
		
		//Specify book as Science category
		ScienceBookDecorator scienceBook = new ScienceBookDecorator(book);
		
		//What if we want to specify both the category and cover type together
		HardCoverDecorator hardCoverScienceBook = new HardCoverDecorator(scienceBook);				

		//Add all the decorated book items in a list
		List<Book> bookList = new ArrayList<Book>() {
			{
				add(book);
				add(fictionBook);
				add(hardCoverBook);
				add(hardCoverFictionBook);
				add(scienceBook);
				add(hardCoverScienceBook);
			}
		};
		
		//Traverse the list to access all the book items
		for(Book b: bookList) {
			System.out.println(b.describe());
		}		
	}
}

Running this gives the following output:

Book
Fiction Book
Book with Hard Cover
Fiction Book with Hard Cover
Science Book
Science Book with Hard Cover

It clearly demonstrates how different properties can be added to any pre-defined class / object. Also, multiple properties can be combined. I have tried to combine all the decorated book items in a list and then access them by iterating over the list.

What we have seen till now is just the standard decorator pattern; it’s been around for a long time now. In these times when functional programming is the new buzzword, one may ponder whether, with the support of lambda expressions in Java, things can be done differently. Indeed, since the decorated component interface (Book) is a functional interface, we can rehash it using lambda expressions in Java. Let’s see how the code looks:

import java.util.List;
import java.util.ArrayList;

public class BookDescriptionMainWithLambda {
	
	public static void main(String [] args) {
		
		BasicBook book = new BasicBook();
		
		//Specify book as Fiction category using Lambda expression
		Book fictionBook = () -> "Fiction " + book.describe();
		
		//Specify that the book has a hard cover using Lambda expression
		Book hardCoverBook = () -> book.describe() + " with Hard Cover";
		
		//What if we want to specify both the category and cover type together
		Book hardCoverFictionBook = () -> fictionBook.describe() + " with Hard Cover";				
		
		//Specify book as Science category using Lambda expression
		Book scienceBook = () -> "Science " + book.describe();
		
		//What if we want to specify both the category and cover type together
		Book hardCoverScienceBook = () -> scienceBook.describe() + " with Hard Cover";				

		List<Book> bookList = new ArrayList<Book>() {
			{
				add(book);
				add(fictionBook);
				add(hardCoverBook);
				add(hardCoverFictionBook);
				add(scienceBook);
				add(hardCoverScienceBook);
			}
		};
		
		bookList.forEach(b -> System.out.println(b.describe()));
	}
}

Running this now gives the same output as before:

Book
Fiction Book
Book with Hard Cover
Fiction Book with Hard Cover
Science Book
Science Book with Hard Cover

We can see that the use of lambda expressions makes the additional decorator classes redundant. You don’t need extra classes; simply specify the additional behavior using a lambda expression. One trade-off is reuse: a concrete decorator class can easily be found and reused later, whereas an inline lambda expression cannot.

All the code snippets can be accessed from my github repo.

Book Review – OCP Java SE 7 Programmer II Certification Guide: Prepare for the 1ZO-804 exam

Although this book is essentially for preparing for the OCP certification, I feel it is a great reference for anyone interested in learning Java 7 features. Oracle has been adding tons of features with each Java release, and it is no mean task to keep up with them. The author has done a great job in providing adequate treatment to all topics. I thoroughly enjoyed reading this book.

High Points:

  • Each chapter has a well-defined mapping to exam objectives, exam tips, sidebars and a Twist in Tale exercise.
  • Interesting illustrations which keep the reader focused, like the Thread life cycle.
  • Funny ways to introduce concepts like overloading and exceptions. I loved the airplane example for exceptions.
  • Inner details explained with decompiled code (e.g., Enum).
  • Good introduction to design patterns like Singleton, Factory, DAO, etc.
  • Lucid explanation of Collections with interesting examples.
  • Novel way to explain the difference between Assertions and Exceptions.
  • Review notes and sample exam questions at the end of each chapter.

Improvement Areas:

  • Compiler errors could be shown in a different color.
  • I think page 249 has a typo. The phrase “Won’t compile; no way to pass argument to T” should be “Won’t compile; no way to pass argument T”.
  • More emphasis should be given to StringBuffer and StringBuilder, as developers typically get confused between them.
  • Exception Handling – More emphasis on why we should not catch Errors. Moreover, I don’t think extending RuntimeException should be encouraged.
  • A few more examples for Java I/O.
  • Concurrency is not an easy area, so more detailed treatment with sufficient examples should be given.
  • A consolidated mock paper at the end would help.

I feel this book is a good reference for preparing for the OCP certification.

This review originally appeared at Amazon listing of the book.

Java 8 Lambda Expression for Design Patterns – Command Design Pattern

In this blog I will illustrate implementing the command pattern in a functional programming style using Java 8 lambda expressions. The intent of the command pattern is to encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. The command pattern is a way of writing generic code that sequences and executes methods based on run-time decisions. The participants in this pattern are the following:

  • Command – Declares an interface for executing an operation.
  • ConcreteCommand – Defines a binding between a Receiver object and an action.
  • Client – Creates a ConcreteCommand instance and sets its receiver.
  • Invoker – Controls the command(s) to carry out the request(s).
  • Receiver – Performs the actual work.

The relationship between these participants is depicted below:

Figure – Command Pattern

Let’s look at a concrete example of the command pattern and see how it gets transformed with lambda expressions. Suppose we have a file system utility that has actions upon it that we’ll be calling, such as opening a file, writing to the file and closing the file. This can be implemented as macro functionality – that is, a series of operations that can be recorded and then run later as a single operation. This would be our receiver.

public interface FileSystemReceiver {
	void openFile();
	void writeFile();
	void closeFile();
}

Each of the operations, such as openFile and writefile, are commands. We can create a generic command interface to fit these different operations into. Let’s call this interface Action, as it represents performing a single action within our domain. This is the interface that all our command objects implement.

public interface Action {
    public void perform();
}

Let us now implement our Action interface for each of the operations. All these classes need to do is call a single method on FileSystemReceiver and wrap this call in our Action interface. Let’s name the classes after the operations that they wrap, following the appropriate class naming convention – so, the openFile method corresponds to a class called OpenFile.

public class OpenFile implements Action {

    private final FileSystemReceiver fileReceiver;

    public OpenFile(FileSystemReceiver fileReceiver) {
        this.fileReceiver = fileReceiver;
    }

    public void perform() {
        fileReceiver.openFile();
    }

}

Now let’s implement our Macro class. A macro consists of a sequence of actions that can be invoked in turn, and this class will act as the invoker. It can record actions and run them collectively. We can store the sequence of actions in a List and then iteratively fetch each action in order to execute it.

import java.util.ArrayList;
import java.util.List;

public class Macro {
    private final List<Action> actions;

    public Macro() {
        actions = new ArrayList<>();
    }

    public void record(Action action) {
        actions.add(action);
    }

    public void run() {
        actions.forEach(Action::perform);
    }
}

While populating the macro, we add an instance of each command to be recorded to the Macro object. Simply running the macro will then call each of the commands in turn. This is our client code.


Macro macro = new Macro();
macro.record(new OpenFile(fileReceiver));
macro.record(new WriteFile(fileReceiver));
macro.record(new CloseFile(fileReceiver));
macro.run();

If you have been with me till this point, you would be wondering where lambda expressions fit in all this. Actually, all our command classes, such as OpenFile, WriteFile and CloseFile, are really just lambda expressions wanting to break out of their wrappers. They are just bits of behavior being passed around as classes. This entire pattern becomes a lot simpler with lambda expressions because we can do away with these classes entirely. Let’s see how the client code can use lambda expressions instead of command classes.


Macro macro = new Macro();
macro.record(() -> fileReceiver.openFile());
macro.record(() -> fileReceiver.writeFile());
macro.record(() -> fileReceiver.closeFile());
macro.run();

This can be further improved by taking cognizance of the fact that each of these lambda expressions is performing a single method call. So, method references can be directly used.


Macro macro = new Macro();
macro.record(fileReceiver::openFile);
macro.record(fileReceiver::writeFile);
macro.record(fileReceiver::closeFile);
macro.run();

The command pattern is easily extendable: new action methods can be added to receivers to create new Command implementations without changing the client code. The Runnable interface (java.lang.Runnable) in the JDK is a popular interface where the command pattern is used. In this blog, I have tried to express the command pattern with Java 8 lambda expressions. As you have seen, lambda expressions need a lot less boilerplate, leading to cleaner code.

Microservices in a Nutshell

At the outset, “Microservices” sounds like yet another term on the long list of software architecture styles. Many experts consider it lightweight or fine-grained SOA. Though it is not an entirely new idea, Microservices seem to have peaked in popularity in recent years, with numerous blogs, articles and conferences evangelizing the benefits of building software systems in this style. The coming together of trends like Cloud, DevOps and Continuous Delivery has contributed to this popularity. Netflix, in particular, has been the poster boy of the Microservices architectural model.

Nevertheless, Microservices deserves its place as a separate architectural style which, while leveraging earlier styles, has a certain novelty. The fundamental idea behind Microservices is to develop systems as a set of fine-grained, independent and collaborating services which run in their own process space and communicate with lightweight mechanisms like HTTP. The stress is on services being small, and they can evolve over time.

In order to better understand the concept of microservices, let’s start by comparing them with traditional monolithic systems. Let’s consider an E-commerce store which sells products online. Customers can browse through the catalog for the product of their choice and give orders. The store verifies the inventory, takes the payment and then ships the order. The figure below demonstrates a sample E-commerce store.

Figure – Monolithic Online Store

The online store consists of many components, like the UI which handles customer interaction. Besides the UI, there are other components for organizing the product catalog, order processing, taking care of any deals or discounts, and managing the customer’s account. The business domain shares resources like Product, Order and Deals, which are persisted in a relational database. The customers may access the store using a desktop browser or a mobile device.

Despite being a modularly designed system, the application is still monolithic. In the Java world the application would be deployed as a WAR file on a Web Server like Tomcat. The monolithic architecture has its own benefits:

  • Simple to develop as most of the IDEs and development tools are tailored for developing monolithic applications.
  • Ease of testing as only one application has to be launched.
  • Deployment friendly as in most cases a single file or directory has to be deployed on a server.

As long as the monolithic application stays simple, everything seems normal from the outside. However, when the application starts getting complex in terms of scale or functionality, the handicaps become apparent. Some of the undesired shortcomings of a monolithic application are as follows:

  • A large monolithic system becomes unmanageable when it comes to bug fixes or new features.
  • Even a single line of code change leads to a significant cycle time for deployment to the production environment. The entire application has to be rebuilt and redeployed even if only one module changes.
  • The entry barrier is high for trial and adoption of new technologies. It is difficult for different modules to use different technologies.
  • It is not possible to scale different modules differently. This can be a huge headache as monolithic architectures don’t scale to support large, long lived applications.

This is where the book “The Art of Scalability” comes as a big help. It describes architectural styles that do scale, modelling scalability on the basis of a three-dimensional scale cube.

Figure – The Scale Cube

X-axis scaling is the easiest way of scaling an application. It requires running multiple identical copies of the application behind a load balancer. In many cases it can provide a good improvement in capacity and availability without any refactoring.

In Z-axis scaling, each server runs an identical copy of the application. It sounds similar to X-axis scaling, but the crucial difference is that each server is responsible for only a subset of the data. Requests are routed to the appropriate server based on some intelligent mechanism. A common way is to route on the basis of the primary key of the entity being accessed, i.e. sharding. This technique is similar to X-axis scaling in that it improves the application’s capacity and availability. However, neither approach does a good job of easing development and application complexity. This is where Y-axis scaling helps.
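
Before moving on, a toy sketch of the Z-axis (sharding) routing just described, with hypothetical server names:

import java.util.Arrays;
import java.util.List;

public class ShardRouter {
    // Hypothetical server pool for Z-axis scaling
    private static final List<String> SERVERS =
            Arrays.asList("server-0", "server-1", "server-2");

    // Route a request by the primary key of the entity being accessed
    public static String route(long customerId) {
        int shard = (int) Math.floorMod(customerId, (long) SERVERS.size());
        return SERVERS.get(shard);
    }

    public static void main(String[] args) {
        System.out.println(route(42L)); // server-0
    }
}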

Y-axis scaling, or functional decomposition, is the third dimension of scaling. We have seen that Z-axis scaling splits things that are similar. Y-axis scaling, on the other hand, splits things that are different. It divides a monolithic application into a set of services, each corresponding to a related piece of functionality like catalog, order management, etc. How do you split a system into services? It is not as easy as it appears to be. One way could be to split by verb or use case. For example, the UI for the compare-products use case could be a separate service.

Another approach could be to split on the basis of nouns or system resources. This service takes ownership of all operations of an entity/resource. For e.g., there could be a separate catalog service which manages the catalog of products.

This leads to another interesting question: how big should a service really be? One school of thought recommends that each service should have only a small set of responsibilities. As per Uncle Bob (Robert C. Martin), services should be designed using the Single Responsibility Principle (SRP). The SRP suggests that a class should have only one reason to change, and it makes great sense to apply it to service design as well.

Another way of addressing this is to design services just like Unix utilities. Unix provides a large number of utilities such as grep, cat, ls and find. Each utility does exactly one thing, often exceptionally well, and can be combined with other utilities using a shell script to perform complex tasks. While designing services, try to model them on Unix utilities so that they perform a single function.

Some people argue that lines of code (10–100 LOC) could be a benchmark for service size. I am not sure it is a good yardstick, what with some languages being more verbose than others. For example, Java is quite verbose compared to Scala. An interesting take is that a microservice should be sized so that an entire microservice can be developed from scratch by a scrum team in one sprint.

In all these interesting and diverse viewpoints, it should be remembered that the goal is to create a scalable system without any problems associated with monolithic architecture. It is perfectly fine to have some services which are very tiny while others are substantially larger.

On applying the Y-axis decomposition to the example monolithic application, we would get the following microservices based system:

Figure – Microservices-Based Store

As you can see, after splitting the monolithic architecture there are different frontend services, each addressing a specific UI need and interacting with multiple backend services. One such example is the Catalog UI service, which communicates with both the Catalog service and the Product service. The backend services encompass the same logical modules we saw earlier in the monolithic system.

Benefits of Microservices

The microservices based architecture has numerous benefits and some of them can be found below:

  • The small size of a microservice leads to ease of development and maintenance.
  • Multiple developers and teams can deliver relatively independently of each other.
  • Each microservice can be deployed independently of other services. If there is a change in a particular service then only that service need be deployed; other services are unaffected. This in turn eases the transition to continuous deployment.
  • This architectural pattern leads to truly scalable systems. Each service can be scaled independently of other services.
  • The microservice architecture helps in troubleshooting. For example, a memory leak in one service only affects that service; other services continue to run normally.
  • Microservice architecture leads to a true polyglot model of development. Different services can use different technology stacks as per their needs. This gives a lot of flexibility to experiment with various tools and technologies, and the small size of these services makes rewriting a service feasible without much effort.

Drawbacks of Microservices

No technology is a silver bullet and microservice architecture is no exception. Let’s look at some challenges with this model.

  • There is added complexity of dealing with a distributed system. Each service would be a separate process and inter-process communication mechanism is required for message exchange amongst services. There are lots of remote procedure calls, REST APIs or messaging to glue components together across different processes and servers. We have to consider various challenges like network latency, fault tolerance, message serialization, unreliable networks, asynchronicity, versioning, varying loads within our application tiers etc.
  • Microservices are more likely to communicate with each other asynchronously. This helps decompose work into genuinely separate, independent tasks which can happen out of order at different times. However, things start getting complex with the need to manage correlation IDs and distributed transactions to tie various actions together.
  • Different test strategies are required, as asynchronous communication and dynamic message loads make systems much harder to test. Services have to be tested both in isolation and in their interactions with each other.
  • Since there are many moving parts, complexity shifts towards orchestration. To manage this, a PaaS-like technology such as Pivotal Cloud Foundry is typically required.
  • Careful planning is required for features that span multiple services.
  • Substantial DevOps skills are required, as different services may use different tools, and developers need to be involved in operations as well.

Summary

I am sure that with all the aspects mentioned above you must be wondering whether you should start splitting your legacy monolithic architecture into microservices, or indeed start any fresh system with this architectural style. The answer, as in most cases, is: “It depends!” Some organizations such as Netflix, Amazon and The Guardian have pioneered this architectural style and demonstrated that it can deliver tremendous benefits.

Success with microservices depends a lot on how well you componentize your system. If the components do not compose cleanly, then all you are doing is shifting complexity from inside a component to the connections between components. Not only that, it also moves complexity to a place that’s less explicit and harder to control.

The success of using microservices also depends on the complexity of the system you’re contemplating. The approach introduces its own set of complexities: continuous deployment, monitoring, dealing with failure, eventual consistency, and the other concerns a distributed system brings. These complexities can be handled, but doing so takes extra effort. So, I would say that if you can manage to keep your system simple, you may not need microservices; if your system risks becoming a complex one, you can certainly consider them.

Java 8 – What’s it got for you

Java 8 was released in March 2014 and promises to be revolutionary in nature. It is packed with some really exciting features at both the JVM and language level. Java 8 encompasses both Java SE 8 and Java ME 8 and might just be the most significant expansion of the Java platform yet. Lambda expressions along with the Stream API increase the expressive power of the platform and make it easier to develop applications for modern, multicore processors. Compact Profiles in Java SE 8 are a key step towards convergence of Java SE and Java ME and allow developers to use just a subset of the platform. Java 8 enables similar skills to be used in diverse scenarios, from embedded Internet of Things (IoT) devices to enterprise servers in the cloud.

Let’s now look at some key features of Java 8.

Language Features

Lambda Expressions

The most dominant feature of Java 8 is support for lambda expressions (closures). This addition brings Java to the forefront of functional programming, alongside other functional JVM languages such as Scala and Clojure. Lambda expressions are anonymous methods that provide developers with a simple and compact means of representing behavior as data. This helps in the development of libraries that abstract behavior, leading to more expressive, less error-prone code. Lambda expressions replace the use of anonymous inner classes. Consider the example below:

processSomething(new Foo() {
    @Override
    public boolean isSuitable(int value) {
        return value == 11;
    }
});

This now gets simplified to:

processSomething(result -> result == 11);
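For this lambda to compile, Foo must be a functional interface, i.e. an interface with a single abstract method. A minimal sketch of what the (hypothetical) Foo and processSomething from the snippet above could look like:

@FunctionalInterface
interface Foo {
    boolean isSuitable(int value);
}

static void processSomething(Foo check) {
    // The lambda (or anonymous class) supplies the isSuitable() behavior.
    System.out.println(check.isSuitable(11)); // prints "true" for the example above
}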

Functional Interfaces

Java 8 comes with functional interfaces, which are defined as interfaces with exactly one abstract method (sometimes called SAM interfaces, for Single Abstract Method). This definition also applies to interfaces created with previous versions of Java. A functional interface can have other methods too, but only one abstract method.

There are several new functional interfaces introduced in the package java.util.function.

  • Function<T,R> – takes an object of type T and returns an R.
  • Supplier<T> – takes nothing and returns an object of type T.
  • Predicate<T> – returns a boolean value based on an input of type T.
  • Consumer<T> – performs an action with a given object of type T.
  • BiFunction<T,U,R> – like Function, but with two parameters.
  • BiConsumer<T,U> – like Consumer, but with two parameters.

There are also several specializations for primitive types, such as:

  • IntConsumer
  • IntFunction
  • IntPredicate
  • IntSupplier

The most interesting aspect of functional interfaces is that any lambda or method reference that fulfills their contract can be assigned to them. See the following code sample:

Function<String, String> newAddress = (address) -> {return "@" + address;};
Function<String, Integer> size = (address) -> address.length();
Function<String, Integer> size2 = String::length;

The first line defines a function that adds “@” as a prefix to a String. The last two lines define functions that do the same thing: get the length of a String. Based on the context, the Java compiler converts the method reference to String’s length() method into a Function (functional interface) whose apply method takes a String and returns an Integer. For example:

for (String s : args) System.out.println(size2.apply(s));

This would print out the lengths of the given strings.

You can even define custom functional interfaces. Use the @FunctionalInterface annotation to ensure the contract is honored; the compiler reports an error if the annotated interface is not a SAM interface.
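Here is a minimal sketch of a custom functional interface (the interface name is hypothetical); note that default and static methods do not count against the single-abstract-method rule:

@FunctionalInterface
interface TemperatureConverter {
    double convert(double value);   // the single abstract method

    // A default method does not break the SAM contract.
    default TemperatureConverter andThenRound() {
        return value -> Math.round(convert(value));
    }
}

TemperatureConverter celsiusToFahrenheit = c -> c * 9 / 5 + 32;
double f = celsiusToFahrenheit.convert(100); // 212.0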

Method References
Method references let us reuse a method as a lambda expression; this works because a lambda expression can be seen as an object-less method. For example, suppose you want to determine file permissions. Assume you have the following set of methods for checking permissions:

public class FilePermissions {
    public static boolean fileIsReadOnly(File file) {/*code*/}
    public static boolean fileIsReadWrite(File file) {/*code*/}
    public static boolean fileIsWriteOnly(File file) {/*code*/}
}

If you want to find out the permissions for a list of files, you can use method references (assuming you have already defined a method getFiles() that returns a Stream<File>):

Stream<File> readfs = getFiles().filter(FilePermissions::fileIsReadOnly);
Stream<File> readwritefs = getFiles().filter(FilePermissions::fileIsReadWrite);
Stream<File> writefs = getFiles().filter(FilePermissions::fileIsWriteOnly);
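The example above uses references to static methods, but Java 8 actually supports four kinds of method references; a short sketch:

import java.util.function.*;

// 1. Static method: ClassName::staticMethod
Function<String, Integer> parse = Integer::parseInt;

// 2. Instance method of a particular object: instance::method
String prefix = "Java 8";
Predicate<String> startsWithPrefix = prefix::startsWith;

// 3. Instance method of an arbitrary object of a given type:
//    ClassName::instanceMethod (the first parameter becomes the receiver)
Function<String, Integer> length = String::length;

// 4. Constructor reference: ClassName::new
Supplier<StringBuilder> newBuilder = StringBuilder::new;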

Default Methods

In order to leverage functional interfaces and lambda expressions in the collections framework, Java needed another new feature: default methods (also known as defender methods or virtual extension methods). Default interface methods provide a mechanism to add new methods to existing interfaces without breaking backwards compatibility. Default methods can be added to any interface. This gives Java multiple inheritance of behavior as well as of types; note that this differs from languages like C++, which provide multiple inheritance of state. Any interface (even a functional interface) can have default methods. If a class implements the interface but does not override the method, it gets the default implementation.

As an example, see the default spliterator() method added to the Set interface (which extends Collection):

public interface Set<E> extends Collection<E> {

    ... // The existing Set methods

    default Spliterator<E> spliterator() {
        return Spliterators.spliterator(this, Spliterator.DISTINCT);
    }
}
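The inheritance of default behavior can be illustrated with a small hypothetical interface; a class that implements it picks up the default method unless it overrides it:

interface Greeter {
    String name();

    // Implementing classes get this method for free.
    default String greet() {
        return "Hello, " + name();
    }
}

class WorldGreeter implements Greeter {
    public String name() { return "World"; }
    // greet() is not overridden, so the default implementation is used.
}

// new WorldGreeter().greet() returns "Hello, World"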

Static Methods

Just like default methods, interfaces can now also have static methods. Static methods cannot be abstract, and even functional interfaces can have them. The example below is the isEqual factory method from the Predicate interface:

static <T> Predicate<T> isEqual(Object target) {
    return (null == target)
            ? Objects::isNull
            : object -> target.equals(object);
}
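Since Predicate.isEqual ships with java.util.function, it can be used directly; a quick usage sketch:

import java.util.function.Predicate;
import java.util.stream.Stream;

Predicate<String> isJava = Predicate.isEqual("Java");

System.out.println(isJava.test("Java"));   // true
System.out.println(isJava.test("Scala"));  // false

// Handy for filtering streams:
long count = Stream.of("Java", "Scala", "Java").filter(isJava).count(); // 2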

Annotations on Java Types

Prior to Java 8, annotations could be used only on declarations: classes, methods, fields and other variable definitions. With Java 8, annotations can also be applied wherever a type is used, for example on method parameters, casts and generic type arguments. This facilitates error detection by pluggable type checkers, for example null-pointer errors and race conditions. See a sample usage below:

public void compute(@NonNull Set<String> info) {...}
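Java 8 only defines the new places where type annotations may appear; the annotations themselves (such as @NonNull) are supplied by pluggable checkers like the Checker Framework. A sketch of a few of the new positions (the variable input is illustrative):

import java.util.*;

// Assumes a @NonNull type annotation from a tool such as the Checker Framework.
List<@NonNull String> names = new ArrayList<>();  // on a generic type argument
String text = (@NonNull String) input;            // on a cast
@NonNull String trimmed = text.trim();            // on a local variable's type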

Core Libraries

Streams

Although lambdas are a great way to represent behavior as a parameter, on their own they don’t go beyond providing a simpler syntax for anonymous inner classes. To unlock the power of lambdas, we need to use them along with extension (default) methods that enhance the Collections API, as well as new classes. In Java 8 this is provided by the Streams API, which, along with lambdas, brings a more functional style of programming to Java.

Streams represent a sequence of objects, somewhat like the Iterator interface. However, unlike an Iterator, a stream is not a data structure, and it can be infinite (a stream of prime numbers does not need to end). Just as it is easy to write an infinite loop in Java, it is possible to create an infinite stream and never terminate its processing.

Streams can be either sequential or parallel. They start off as one and may be switched to the other using stream.sequential() or stream.parallel(). The actions of a sequential stream occur sequentially on one thread. The actions of a parallel stream may happen simultaneously on multiple threads.
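A minimal sketch of switching between the two modes (the data is illustrative):

import java.util.Arrays;
import java.util.List;

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

// Sequential: elements are processed one at a time on the calling thread.
int seqSum = numbers.stream().mapToInt(Integer::intValue).sum();

// Parallel: the same pipeline, but work may be split across threads
// of the common fork/join pool. Both produce 15.
int parSum = numbers.parallelStream().mapToInt(Integer::intValue).sum();

Note that for a list this small the parallel version is usually slower, because the coordination overhead outweighs the actual work.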

The Stream interface supports the map/filter/reduce pattern and does lazy execution. You can think of a stream as a pipeline with three parts:

  • The source that provides a stream of objects to be processed.
  • Zero or more intermediate operations that take the input stream and generate a new output stream. The output stream may be the same as the input stream, reduced or enlarged in size, or be objects of a different type. It is completely flexible.
  • A terminal operation. This is one that takes an input stream and either produces a result or has some kind of side-effect. Printing a message to the screen is an example where no result is produced, but there is a side effect.

The following example shows the different parts of the pipeline:

int sum = transactions.stream()                               // Source
        .filter(t -> t.getBuyer().getCity().equals("London")) // Intermediate operation
        .mapToInt(Transaction::getPrice)                      // Intermediate operation
        .sum();                                               // Terminal operation

The filter and map methods don’t really do any work; they just set up a pipeline of operations and return a new Stream. All the work happens when we reach the sum() operation. The filter()/map()/sum() operations are fused into one pass over the data, for both sequential and parallel pipelines.

Streams can be created in various ways:

    • From collections and arrays
      • Collection.stream()
      • Collection.parallelStream()
      • Arrays.stream(T array) or Stream.of()
    • Static factories
      • IntStream.range()
      • Files.walk()
    • Custom
      • Implementing java.util.Spliterator
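A short sketch showing these creation styles side by side (the class and variable names are illustrative):

import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;
import java.util.List;
import java.util.stream.*;

public class StreamSources {
    public static void main(String[] args) throws IOException {
        List<String> names = Arrays.asList("Alice", "Bob", "Carol");

        Stream<String> fromCollection = names.stream();        // from a collection
        Stream<String> inParallel = names.parallelStream();    // parallel variant
        Stream<String> fromValues = Stream.of("x", "y", "z");  // from explicit values
        IntStream range = IntStream.range(0, 10);              // static factory: 0..9

        // Static factory over the file system; the stream should be closed.
        try (Stream<Path> paths = Files.walk(Paths.get("."))) {
            System.out.println(paths.count());
        }
    }
}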

Stream sources manage the following aspects:

      • Access to stream elements
      • Stream sources allow for the decomposition of a stream so that it can be processed in parallel. This is done internally using the fork/join framework.
      • Stream characteristics
        • ORDERED – The stream has a defined order (it does not imply that it is sorted, just that the order is fixed). For example a List has a fixed order to the elements of that list. A HashMap does not have a defined order.
        • DISTINCT – No two elements in the stream are equal.
        • SORTED – The stream has an order that has been sorted according to some criteria, e.g. alphabetical, numeric. Any stream that is sorted is also ORDERED.
        • SIZED – The stream is finite, i.e. has a known size.
        • SUBSIZED – If this stream is split for the purposes of parallel processing the size of the child streams will be known. A balanced tree is an example where this is not true. The size of the tree itself is known, but the size of subtrees may not be.
        • NONNULL – There will be no null elements in the stream.
        • IMMUTABLE – The stream source cannot be structurally modified (no elements can be inserted, replaced or deleted).
        • CONCURRENT – The contents of the stream source may be modified concurrently in a thread-safe way.

Stream Intermediate Operations – Intermediate operations can affect the characteristics of the stream. For example, applying a map to a stream does not change the number of elements (each element in the input stream is mapped to an element in the output stream), but it could affect the DISTINCT or SORTED characteristics. Intermediate operations use lazy evaluation wherever possible.

Stream Terminal Operations – Invoking a terminal operation executes the pipeline. The pipeline is constructed based on the source and any intermediate operations, and processing can then start either sequentially or in parallel. Lazy evaluation is used, so elements are only processed as they are required. Terminal operations can take advantage of pipeline characteristics; for example, if a stream is SIZED, then when toArray is used an array of the correct size can be allocated at the start, rather than having to be resized as more elements arrive.

Optional

Java 8 comes with the Optional class in the java.util package, which helps avoid null return values (and thus NullPointerException). An Optional is like a stream that can have either one element or none. To create an Optional, use the of or ofNullable static factory methods (there are no public constructors). See the example below:

Optional<SalesData> maybeSales = Optional.of(salesData);
maybeSales = Optional.ofNullable(salesData);

maybeSales.ifPresent(SalesData::printRevenue);

SalesData sales = maybeSales.orElse(new SalesData());

maybeSales.filter(g -> g.lastRead() < 2).ifPresent(SalesData::display);

If salesData on the first line is null, a NullPointerException is thrown immediately; if salesData might be null, use ofNullable, which returns an empty Optional instead. Rather than writing an explicit if (salesData != null) check, you can use ifPresent() with a lambda expression (in this case a method reference). If the Optional is empty nothing is printed, as the lambda is only invoked when a value is present.

When we want to use the value of an Optional, we can also provide a way of generating a value if one is not present, using orElse (returns the parameter if empty), orElseThrow (throws a specified type of exception if empty) or orElseGet (uses a Supplier if empty).

Optionals can also be filtered using a Predicate. In the example above, the filter is only applied if a value is present. filter returns an Optional that either contains the value of maybeSales (if the Predicate evaluates to true) or is empty (if it evaluates to false); ifPresent then handles this Optional in the same way as before.

Concurrency Updates

There are certain updates related to concurrency.

  • Scalable update variables – New classes like DoubleAccumulator and DoubleAdder (and their Long counterparts) are good for multi-threaded applications where threads contend for access to a variable that accumulates or adds values. They are designed for frequent updates and infrequent reads (see the sketch after this list).
  • ConcurrentHashMap updates – Improved scanning support and key computation (for example, the compute and merge methods).
  • ForkJoinPool improvements – A completion-based design for I/O-bound applications; a thread that blocks hands its work to a thread that is running.
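A minimal sketch of LongAdder, one of the new scalable update variables (the parallel loop is just a simple way to generate contention):

import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

LongAdder hits = new LongAdder();

// Many threads can increment concurrently with little contention;
// internally each thread tends to update its own cell.
IntStream.range(0, 10_000).parallel().forEach(i -> hits.increment());

// sum() combines the per-thread cells; reads are expected to be infrequent.
System.out.println(hits.sum()); // 10000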

Date and Time APIs

Java 8 introduces a new java.time API that is thread-safe, easier to read and more comprehensive than the previous API. It provides excellent support for the ISO 8601 international time standard used by global businesses, and it also supports the frequently used Japanese, Minguo, Hijrah and Thai Buddhist calendars. It includes the following new classes (among others):

  • Clock – provides access to the current date and time
  • ZoneId – represents a time zone
  • LocalTime – a time without a time zone
  • LocalDateTime – an immutable date and time without a time zone

Each of these classes has a specific purpose and has explicitly defined behavior without side effects. The types are immutable to simplify concurrency issues when used in multitasking environments. The API is extensible to add new calendars, units, fields of dates, and times.
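A brief sketch of these classes in action (the dates and variable names are illustrative):

import java.time.*;

Clock clock = Clock.systemUTC();                  // a source of the current instant
LocalTime lunch = LocalTime.of(13, 30);           // a time without a time zone
LocalDateTime release = LocalDateTime.of(2014, Month.MARCH, 18, 13, 30);
ZoneId london = ZoneId.of("Europe/London");

// Immutability: plusDays() returns a new object, the original is unchanged.
LocalDateTime dayAfter = release.plusDays(1);
ZonedDateTime inLondon = release.atZone(london);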

Platform

Compact Profiles

With compact profiles, three well-defined API subsets of the Java 8 specification have been introduced. They offer a convergence of the Java ME Connected Device Configuration (CDC) with Java SE 8. With full Java 8 language and API support, developers now have a single specification that will support the Java ME CDC class of devices under the Java SE umbrella. Compact Profiles enable the creation of applications that do not require the entire platform to be deployed and run on small devices with limited storage.

The different profiles are as follows:

  • Compact 1 – The smallest subset of packages that supports the Java language. Includes logging and SSL. This is the migration path for people currently using the Connected Device Configuration (CDC). Size is 11 MB.
  • Compact 2 – Adds support for XML, JDBC and RMI (specifically JSR 280, JSR 169 and JSR 66). Size is 16 MB.
  • Compact 3 – Adds management, naming, more security and compiler support. Size is 30 MB.

None of the compact profiles includes any UI APIs; they are all headless.

JavaFX 8

JavaFX 8 is now integrated with Java 8 and works well with lambda expressions. It simplifies many kinds of code, such as event handlers and cell value factories on TableView. Key features are:

  • New Stylesheet – JavaFX 8 includes a new stylesheet named Modena, which gives JavaFX 8 a more modern look. The older Caspian stylesheet can still be used.
  • Improvements To Full Screen Mode – For applications like kiosks and terminals it is now possible to configure special key sequences to exit full screen mode, or even to disable the ability to exit full screen mode altogether.
  • New Controls – JavaFX 8 includes a few new controls. The most notable of these is the date picker, which people have long been asking for. A combined table and tree view has also been included.
  • Touch Support – Gestures like swipe, scroll, zoom and rotate are supported as event listeners and touch specific events can be detected as well as multiple touch points.
  • 3D Support – All the features required for 3D support are present: basic shapes that can be combined to form more complex shapes, as well as the ability to construct shapes of arbitrary complexity using meshes and triangle meshes.

Virtual Machine

Nashorn JavaScript Engine

Nashorn is a complete rewrite of the JavaScript engine included in the JDK, replacing the old Rhino version. JavaScript applications can be run on their own using the new jjs command, or integrated into Java code using the existing javax.script API. Some of the key features are:

  • Lightweight, high-performance JavaScript engine
  • Use existing javax.script API
  • Compliance with the ECMA-262 Edition 5.1 (ECMAScript 5.1) language specification
  • New command-line tool, jjs to run JavaScript
  • Internationalised error messages and documentation
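A minimal sketch of running JavaScript from Java via the existing javax.script API (the class name is illustrative):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornDemo {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
        Object result = nashorn.eval("var x = 10; x * 2;");
        System.out.println(result); // 20
    }
}

From the command line, the same script could be run with the new jjs tool, e.g. jjs myscript.js.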

Removal Of The Permanent Generation

Tuning the permanent generation, which holds data used by the VM such as Class and Method objects and interned strings, has been problematic in the past. Its removal simplifies tuning; class metadata now lives in native memory, in an area known as Metaspace.

Summary

In summary, Java 8 offers a new opportunity for enhanced innovation for Java developers who work on anything from tiny devices to cloud-based systems. Developer productivity and application performance should both increase, thanks to the reduced boilerplate and the easier parallel programming that lambdas and streams enable. Java 8 also offers best-in-class diagnostics, with a complete tool chain to continuously collect low-level, detailed runtime information.

As you can see, Java 8 adds powerful new features to all areas of the core Java platform (language, libraries and VM), as well as client UI enhancements in JavaFX. By bringing the advantages of Java SE to embedded development, developers can transfer their Java skills to fast-evolving new realms such as the Internet of Things, enabling Java to support devices of any size. It is an exciting time to be a Java developer.