Optalysys welcomes Software Engineer Omri Kalinsky to the team, based in Nevada, USA. Omri has been a professional software engineer since 1998. His background spans applications ranging from medical imaging, military simulation and games to business and productivity software, embedded systems and mobile applications. He has specific expertise in C++, including developing and teaching C++ […]
Using diffraction and Fourier optics, coupled with our novel designs, we are able to combine matrix multiplication and optical Fourier transforms into more complex mathematical processes, such as derivative operations. In place of lenses, we also use liquid crystal patterns to focus the light as it travels through the system. This means the tight alignment tolerances that exist throughout the system are achieved via dynamic addressing in software.
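To make the derivative example concrete, here is a minimal NumPy sketch of how differentiation becomes a simple multiplication in Fourier space. This is illustrative only, not Optalysys code: it shows the mathematical identity the optical system exploits, computed digitally.

```python
import numpy as np

# Sample sin(x) on a periodic grid.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

# In Fourier space, d/dx becomes multiplication by i*k.
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi  # angular wavenumbers
df = np.fft.ifft(1j * k * np.fft.fft(f)).real     # spectral derivative

# The result matches the analytic derivative cos(x) to machine precision.
assert np.allclose(df, np.cos(x), atol=1e-10)
```

The same transform-multiply-invert structure underlies the spectral CFD methods discussed later in the piece.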
For operations such as the Fourier transform, each number in the output is the result of a calculation involving every number in the input. Dividing such work between multiple processor cores results in complex data-management issues, as each core must communicate with the others and data must be buffered into local memory. This creates challenging coding problems and yields only incremental improvements as the resolution is increased.
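The all-to-all dependence described above can be seen directly from the DFT definition. The sketch below (a hypothetical illustration, not production code) computes a single output bin by hand and shows that it is a sum over every input sample:

```python
import numpy as np

def dft_output(inputs, j):
    """The j-th DFT output: a weighted sum over EVERY input sample."""
    n = len(inputs)
    m = np.arange(n)
    return np.sum(inputs * np.exp(-2j * np.pi * j * m / n))

x = np.random.default_rng(0).standard_normal(8)
full = np.fft.fft(x)

# Every one of the n output bins touches all n inputs, which is why
# splitting the work across cores forces heavy inter-core communication.
manual = np.array([dft_output(x, j) for j in range(len(x))])
assert np.allclose(manual, full)
```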
The Optalysys technology operates truly in parallel, using the natural properties of light and diffraction. Numerical data is entered into the liquid crystal grids (known as spatial light modulators, or SLMs) and is encoded into the laser beam as it passes through. The data is then processed together as the beam is focussed or passes through the next optical stage. Increasing the resolution of the data is achieved by adding more pixels to the SLM, but the process time, once the data is addressed, remains the same regardless of the amount of data being entered. The Optalysys approach therefore provides a truly scalable method of performing large calculations of the type used in computational fluid dynamics (CFD) modelling and correlation pattern recognition.
The simplest benchmark to use is FLOPS (floating point operations per second), the measure used for electronic processors, although this is not an ideal comparison as it does not take into account the supporting infrastructure requirements, or the specific process being judged. Roughly speaking, a two-dimensional FFT (Fast Fourier Transform) over an n×n grid takes on the order of n²log(n) operations, where n is the side length of the grid. Based on this, our first demonstrator system, which will operate at a frame rate of 20Hz and a resolution of 500×500 pixels and produce a vorticity plot, will operate at around 40GFLOPS. However, this will be scaled in frame rate, resolution and functionality, to produce solver systems operating at well over the PetaFLOP rates quoted for supercomputers, leading to ExaFLOP calculations and beyond.
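The scaling argument can be sketched numerically. The helper below uses the basic n²·log₂(n) operation count quoted above; the constant factor and counting convention vary between sources, and the article's quoted figures will include additional solver operations beyond a single transform, so this is a scaling illustration rather than a derivation of the 40GFLOPS number:

```python
import math

def fft2d_ops(n):
    """Rough operation count for one n-by-n 2-D FFT: ~n^2 * log2(n),
    up to a constant factor that depends on the counting convention."""
    return n * n * math.log2(n)

def equivalent_rate(n, frames_per_second):
    """Equivalent operations/second if one transform completes per frame."""
    return fft2d_ops(n) * frames_per_second

# Doubling the grid side more than quadruples the work per frame, but an
# optical system's frame time stays fixed, so the equivalent digital rate
# grows at the same pace with no extra processing time.
r1 = equivalent_rate(500, 20)
r2 = equivalent_rate(1000, 20)
assert r2 / r1 > 4  # n^2 growth plus the extra log factor
```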
We are taking each step carefully. The potential of this technology is amazing but it must complement and add to what CFD and computation can already do. Initially it is sensible to focus on spectral CFD methods and some of the linear algebra operations that CFD draws on.
Optical techniques can make every stage of the CFD process more effective. We are still far from a purely optical CFD solver, but optics can already help us generate data, and learn from it, more effectively. We are always thinking about the full CFD process and what can be learned from it, which leads to big data, another exciting opportunity.
It is not easy to say how optical technologies will drive the future of CFD. Spectral methods are widespread in weather analysis, large research organisations and academia. Our focus is to provide optical technologies that complement what already exists.
That would be the ultimate project to get involved with, and we would love to partner with others to do it. Our focus is to prove initially that optical technologies can improve and complement what is already being done digitally.
Absolutely. Optical technology represents point data using light and gives us access to spectral, non-local operations instantaneously. This is potentially very important because we might be able to make linear algebra on digital computers more efficient and more accurate. For example, we can perform matrix multiplications or convolution operations at the speed of light.
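The link between convolution and the Fourier transform is what makes convolution such a natural fit for an optical system. The digital sketch below (illustrative only, not Optalysys code) verifies the convolution theorem: circular convolution in real space equals pointwise multiplication in Fourier space.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, -1.0, 2.0, 0.0])

# Convolution theorem: transform, multiply pointwise, transform back.
spectral = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Direct circular convolution for comparison.
n = len(a)
direct = np.array(
    [sum(a[m] * b[(k - m) % n] for m in range(n)) for k in range(n)]
)

assert np.allclose(spectral, direct)
```

In an optical processor the transform and the pointwise product are carried out by diffraction and modulation rather than arithmetic, which is the sense in which the operation runs "at the speed of light".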