Java - Parallel Programming is Hard
Have you ever asked yourself why you cannot utilize all the cores you have? Is there an NPE that shows up once a week and nobody knows why?
Parallel programming is hard, and not only in Java. This talk explains why, and aims to raise overall awareness of its hidden problems.
Most people get parallel programming and shared-state management wrong. Java offers a wide range of solutions, from raw threads to (soon) structured concurrency. All of this is built on top of a strong formal definition: the Java Memory Model. On the one hand, this leaves room for mistakes; on the other, it allows the underlying platform to be leveraged more efficiently.
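To make the "shared state gone wrong" point concrete, here is a minimal sketch (not from the talk itself, class and field names are illustrative) of the classic lost-update race: a plain `int` increment is a non-atomic read-modify-write, while `AtomicInteger` performs the same update atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RacyCounter {
    static int plain = 0;                              // unsynchronized shared state
    static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                               // read-modify-write: updates can be lost
                atomic.incrementAndGet();              // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // `plain` is frequently below 200000; `atomic` is always exactly 200000.
        System.out.println("plain  = " + plain);
        System.out.println("atomic = " + atomic.get());
    }
}
```

The racy counter may even print the correct total on a lucky run, which is exactly what makes such bugs so hard to reproduce and review.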
This talk provides insights into how Java deals with concurrency, why many developer assumptions are wrong, and which optimizations are applied to make code run fast while still obeying the limits of the spec. It is all about understanding the hardware, even when using a high-level language. Some examples will also motivate you to leave concurrent programming to the experts, to leverage libraries instead of rolling your own implementation, and to apply extra caution when reviewing code.
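One optimization that commonly breaks developer assumptions is hoisting: the Memory Model permits the JIT to pull the read of a non-`volatile` field out of a loop, so a reader thread may never observe a writer's update. The sketch below (illustrative, not taken from the talk) uses `volatile` to restore visibility; removing the keyword can make the reader spin forever, and the spec allows it.

```java
public class VisibleFlag {
    // Without `volatile`, the JIT may hoist the read of `running` out of the
    // reader's loop, and the update below might never become visible.
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) { /* busy-wait until the flag flips */ }
            System.out.println("reader saw the update and stopped");
        });
        reader.start();
        Thread.sleep(100);   // give the reader time to enter the loop
        running = false;     // volatile write: guaranteed visible to the reader
        reader.join(5_000);  // terminates promptly thanks to volatile
    }
}
```

With `volatile` in place the program reliably terminates; without it, behavior depends on the JIT and the hardware, which is precisely the kind of "works on my machine" trap the talk warns about.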