My experience with AiiDA

AiiDA and I go way back, but sadly, I never got the opportunity to use it extensively until now. Lately, as I have gotten more exposure, I feel the same as the first time I started experimenting with it: lost!

I decided to write this post not as criticism: if anything, I am the last person who should criticize, as I have one or two repos of my own that I still need to find time to finish documenting. I just like the idea and its purpose and would like to talk about it as a user, so that you don't feel alone. Since new versions come out fast, it's quite possible the issues I point out here will be solved soon.

If you don't know what AiiDA is, it is automation software that lets you run multiple simulations, read the outputs, adjust, and re-run; basically, it is like a little robot that does a lot of the boring work for you, while staying fairly up to date with new methods and algorithms (check this example).
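To give a flavour of what this looks like in practice, here is a minimal sketch of submitting a single calculation through AiiDA. It assumes an existing profile, the aiida-quantumespresso plugin, and a configured code labelled pw@localhost; the node PK, parameters, and labels are placeholders, not anything from a real setup.

```python
# Minimal sketch of submitting one calculation through AiiDA.
# Assumes an existing profile, the aiida-quantumespresso plugin,
# and a configured code labelled 'pw@localhost' (all placeholders).
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

load_profile()  # connect to the default AiiDA profile/database

PwCalculation = CalculationFactory('quantumespresso.pw')

builder = PwCalculation.get_builder()
builder.code = orm.load_code('pw@localhost')      # the code to run
builder.structure = orm.load_node(1234)           # a previously stored StructureData node
builder.parameters = orm.Dict(dict={              # Quantum ESPRESSO input namelists
    'CONTROL': {'calculation': 'scf'},
    'SYSTEM': {'ecutwfc': 40.0},
})
kpoints = orm.KpointsData()
kpoints.set_kpoints_mesh([4, 4, 4])
builder.kpoints = kpoints
builder.metadata.options.resources = {'num_machines': 1}
# (a real PwCalculation also needs pseudopotentials via builder.pseudos)

# Submit to the daemon: AiiDA runs the job, parses the outputs,
# and stores inputs and outputs with full provenance in its database.
node = submit(builder)
print(f'Submitted calculation with PK {node.pk}')
```

From there, the "little robot" part is that workchains built on top of such calculations can inspect the parsed outputs, adjust the inputs, and resubmit without you touching anything.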

Continue reading

Extracting pinch-off and threshold voltages in quantum transistors

Many times I have needed to extract threshold voltages from experimental results. I remember that in my PhD days this was quite a debate, and we could generally agree there is no optimal way of doing it. The problem mostly lay in the fact that you are never exactly sure at which part of the plot the current starts to flow, or the channel becomes depleted. In room-temperature transistors, this was mainly due to the intermediate region of thermal population of the bands. In quantum transistors, we are playing in the low-temperature regime, where most of the time the dopants are thought to be frozen out.

What is really certain is that all voltages that you compare against will need to be extracted using the same method.

Since what we are interested in is changes in the curvature of the plot, we expect to play around with derivatives a lot. But here there is an extra problem: experimental results can be very noisy. Once you take the derivative, you amplify that noise, and it gets even worse in the second derivative (see figure below).

Figure: Characteristics of a transistor with two pinch-off positions in the classical regime. The first derivative (blue plot) shows a pronounced amount of noise.
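One common way to tame the noise is to smooth the curve before differentiating, for example with a Savitzky-Golay filter, and then take the threshold as the gate voltage where the smoothed second derivative peaks. Below is a minimal sketch of that idea; it is only one of the many extraction methods alluded to above, and the file name and column layout are placeholders.

```python
# Sketch: extract a threshold voltage from a noisy transfer curve by
# smoothing with a Savitzky-Golay filter and locating the peak of the
# second derivative. One possible method among many; file/columns are
# placeholders.
import numpy as np
from scipy.signal import savgol_filter

vg, current = np.loadtxt('transfer_curve.dat', unpack=True)  # gate voltage, drain current
dv = np.mean(np.diff(vg))  # assumes a roughly uniform voltage step

# Smooth the raw current, then compute a smoothed second derivative.
i_smooth = savgol_filter(current, window_length=21, polyorder=3)
d2i = savgol_filter(current, window_length=21, polyorder=3, deriv=2, delta=dv)

# Take the threshold as the gate voltage of maximum curvature.
v_th = vg[np.argmax(d2i)]
print(f'Extracted threshold voltage: {v_th:.3f} V')
```

Whatever the details, the point from before still stands: as long as every curve you compare is run through the same procedure, the relative values are meaningful.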
Continue reading

On the universality of Landau theories

There is a lot of buzz about topological materials and the quantum Hall effect these days, as we mark 40 years since its discovery. If you know a few things about topological materials, you will definitely know that the theory behind them describes a macroscopic mechanism that originates from microscopic (quantum) effects.

There is a similar class of problems, maybe less famous at present, namely that of polarization in materials, which has its own counterpart, the "modern theory of polarization", developed in the '90s. What the two have in common is exactly the emergence of macroscopic phenomena from microscopic ones.

Continue reading

Vector potential with an angle to the periodic direction

One of the most difficult aspects of tight-binding models is the incorporation of a magnetic field. That is because a lot of things that have simple analytical expressions in quantum mechanics change when we are talking about a tight-binding model, especially one derived from first principles using Density Functional Theory, like the one discussed here.

One of the problems that can emerge is that there exists an angle \theta between the periodic direction and the direction in which the magnetic field is applied. This differs from the case of the standard Peierls phase, defined as
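t_{mn} \;\rightarrow\; t_{mn}\,\exp\!\left( \frac{ie}{\hbar} \int_{\mathbf{r}_n}^{\mathbf{r}_m} \mathbf{A} \cdot d\mathbf{l} \right),

where t_{mn} is the hopping element between the sites at \mathbf{r}_n and \mathbf{r}_m and \mathbf{A} is the vector potential (this is the standard textbook form of the Peierls substitution; the sign and prefactor conventions vary between references).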

Continue reading

Interview on ΒΗΜΑ Science

Our work at CEA, in collaboration with CNRS, has been featured in the print edition of the Greek newspaper ΒΗΜΑ. I am happy that the project I am currently working on, and the way it connects to the work I was doing at Aristotle University of Thessaloniki, have gained publicity.

You can find it online here (requires a subscription):

https://www.tovima.gr/printed_post/i-kvantiki-yperoxilfsta-skaria/

Marie Curie fellowship

I am happy to announce that today was the first day of my journey as a Marie Skłodowska-Curie fellow, funded by the European Union. My project is titled "A predicting platform for designing semiconductor quantum devices" (PRESQUE), and I will be working on it at CEA Grenoble, under the supervision of Xavier Waintal.

This project has the ambitious goal of making computational predictions for semiconductor-based quantum transistor devices a reality; the goal is ambitious because a lot of different physics needs to be incorporated into a single model. Luckily, the group of Dr. Waintal has done great work developing the software Kwant, including different modules that can serve this aim.
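To give an idea of the building blocks Kwant provides, here is a minimal quantum transport sketch: a plain square-lattice wire with two leads, from which the transmission is obtained via the scattering matrix. This is just the generic textbook example, not one of the project's actual device models, and the geometry and energies are arbitrary.

```python
# Minimal Kwant sketch (illustrative only): a clean square-lattice wire
# with two leads; compute the transmission from the scattering matrix.
import kwant

t = 1.0                                   # hopping energy
lat = kwant.lattice.square(a=1, norbs=1)

# Scattering region: a 10 x 5 rectangle with on-site energy 4t.
syst = kwant.Builder()
syst[(lat(x, y) for x in range(10) for y in range(5))] = 4 * t
syst[lat.neighbors()] = -t

# Semi-infinite leads attached on the left and right.
lead = kwant.Builder(kwant.TranslationalSymmetry((-1, 0)))
lead[(lat(0, y) for y in range(5))] = 4 * t
lead[lat.neighbors()] = -t
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())

# Finalize and compute the transmission (conductance in units of e^2/h).
fsyst = syst.finalized()
smatrix = kwant.smatrix(fsyst, energy=0.5)
print('T =', smatrix.transmission(1, 0))
```

The real devices will of course need much more on top of this (electrostatics, disorder, realistic band structure), which is exactly where the "lot of different physics" comes in.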

If we manage to build such a computational model, it means we can accelerate the search for the optimal device configuration and ultimately get closer to the holy grail: quantum advantage.

It will be a lot of work, with lots of interesting problems to solve, so there are bound to be some technical intricacies along the way, which you can follow on this blog, along with other updates on the project.


Teaching in the age of doing

Ever since I was really young, I have hated classrooms. Throughout my life, I've spent a great portion of my time thinking about how lecturing should be done, and I have always had crazy, innovative ideas. But I was also afraid that they would not be effective. So then I started thinking about what a society that supports this type of teaching would look like. But then I was afraid it was not worth it, since I would not be able to change the whole of it!

Fast forward 15 years and boom! I'm the lecturer! And after spending some time observing people I admire for their patience (how they talk, their ideas, how they teach), I realise that there is absolutely no reason to be scared! The 'preparation' was really long and the ideas are many, but now it is time to act on them, and I can't help but be excited for what is coming!

So there, start small. Like this:

Continue reading

More Beyond Moore anyone?

Recently, I saw a post on the internets stating that companies like Intel should focus on creating new technologies instead of trying to optimize existing ones. Specifically, that they should "try to find something amazing that could replace SOI".

This point of view is obviously simplistic, but it made me think about the wrong perceptions we have when we are newbies to the technologies out there.

First of all, we need to be able to talk in common terms, and the ITRS does a pretty good job of helping us with that. So, there are three categories at the moment: More Moore, More than Moore, and Beyond Moore. More Moore means scaling the current CMOS technology by incorporating different materials or making process adjustments such as high-k dielectrics and SOI, or even different FET structures (FinFETs), etc. More than Moore means the incorporation of a new technology into standard CMOS, like MEMS. And finally, Beyond Moore involves a leap away from conventional CMOS, not just integration into it. Examples in this case are limited only by your imagination: carbon electronics, spintronics, memristors, you name it.

Therefore, SOI is not really a different technology at all. Despite being effective in providing memory devices for the radiation-hardening community, the real goal of SOI was to solve issues involving the scaling of the CMOS process itself, not to invent something completely new. And this is the whole point of Beyond CMOS: to overcome the limits of the latter. Whatever new and exciting thing you find in the More Moore spectrum, the imminent approach of the physical limits means you will soon have to abandon it.

But on to the real issue here: what research should industry do? Well, I don't know who gave people the crazy idea that companies operate for the good of humanity, but the real answer is "whatever will bring them the highest income with the least expense". So, apart from the obvious minimization of risk and a reluctance to undertake a project that might not be successful, it also means that they operate on a "milk the cow" basis. And you do that by staying close to the technology that has the best chance of evolving and being adopted in the future.

Intel, for example, has been reducing the size of its transistors using process adjustments every two years according to [1]:

Figure from the Cadence white paper [1], showing Intel's process scaling over time.

This white paper also mentions III-V semiconductors, probably as a project they have undertaken. But they also go a little beyond that by mentioning spintronics and quantum computing as things whose feasibility they are assessing, or at least that's what I make of it. Visiting their research website, you can really tell where this company is oriented: standard consumer electronics, servers, etc. High speed, low power consumption. IBM's website, however, has a different tone. It is full of different areas of interest: spintronics, carbon nanotubes, optical chipsets, etc.

Beyond Moore scaling of electronics involves intense research that spans different scientific areas (materials, chemistry, etc.). To reach the point where a company is interested in a specific new idea, my view is that not only does it need to have a high probability of actually working (and working better than CMOS in the aspects of interest, like speed and power consumption), but it also needs to be scalable, so that the cow can be milked accordingly.

FPGAs for space

FPGAs are considered the future of space electronics, as there is nothing more useful, in terms of robustness, than reconfigurable devices. But what about their resistance to space radiation?

Commercial rad-hard ICs are getting rarer today, and most public efforts in the area are somewhat outdated. Demand proved to be lower than supply and, the way I see it, the field is starting to get very complex with all these new technologies popping up in the race to overcome the limits predicted by Moore's law.

One of the biggest enemies of the industry is the increased cost of changing the process technology, which has led many developers to turn to Radiation Hardening By Design (RHBD). RHBD, on the other hand, has some known inherent disadvantages, including higher power consumption, an area penalty, and a design headache. This article also describes how RHBD does not solve the problem of lowering the final cost.

The cheapest method seems to be the combination of FPGAs (rad-hard or not) with libraries that automatically harden the design. One such library is Design Against Radiation Effects (DARE), which implements redundancy and voting techniques. These schemes increase the area penalty of the design, but remove a significant amount of the burden from the designers.
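I don't know the internals of DARE, but to illustrate the basic idea behind redundancy and voting: in triple modular redundancy (TMR), each register is replicated three times and a majority voter masks a single upset. Here is a toy sketch of just the voting logic, written in Python for readability; in reality this lives in the HDL and standard-cell library, and nothing here is taken from DARE itself.

```python
# Toy illustration of triple modular redundancy (TMR) with a bitwise
# majority voter. Real implementations live in HDL / standard-cell
# libraries; this only shows the voting logic itself.

def majority_vote(a: int, b: int, c: int) -> int:
    """Return the bitwise majority of three redundant register copies."""
    return (a & b) | (b & c) | (a & c)

# Three copies of the same 8-bit register value...
reg_a = reg_b = reg_c = 0b1011_0010

# ...one of which suffers a single-event upset (bit 5 flips).
reg_b ^= 1 << 5

# The voter masks the upset and recovers the original value.
assert majority_vote(reg_a, reg_b, reg_c) == 0b1011_0010
print(f'voted value: {majority_vote(reg_a, reg_b, reg_c):#010b}')
```

Tripling every register (plus the voters) is exactly where the area and power penalties mentioned above come from.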

The question that remains now is not only whether hardening-by-design techniques are the future of radiation hardening, but whether FPGAs are the future of space electronics. Without being an expert on the subject, I'd say 'no' to both. I know it might not qualify as a proper scientific argument, but just as "secure" programming languages (C++, Java) did not exclude the use of C in specific cases, FPGAs simply cannot replace all electronics. What's more, you cannot neglect the advantages of FPGAs hardened using a "by process" method. Generally, examining hardening at the transistor level is the only way to exploit the full potential of the extremely small, lightweight, and power-saving electronics that we all hope will emerge.