Saturday, January 06, 2007

Going Big 2

Another novel theory on how it would all end comes from Brett McInnes (my maths lecturer).

This is what he says on his site (which I have linked in my links sidebar on the right):

* And you thought the Big Crunch was about a Big Smash?

Evidence for a positive cosmological constant keeps piling up. To be more precise, the evidence keeps piling up for some kind of "dark energy" with a *negative* pressure which is close in absolute value to its [positive] density. Of course, to say that the ratio of pressure to density is observationally close to -1 is to say that it might be slightly greater than -1... or maybe slightly *less* than -1. In fact, there has been a lot of work on the former, which can be modelled by a "quintessence" field, but very little on the latter. Now in fact the "less than -1" cosmologies, called "Phantom Cosmologies" by R. Caldwell [who was one of the quintessence pioneers], are very interesting indeed, for several reasons.

Caldwell's *particular* phantom cosmologies have the bizarre property that they come to an end eventually, like the familiar "Big Crunch" cosmologies----but in the phantom world, the end comes not because the Universe collapses to zero size, but rather because the distance between any two given galaxies becomes infinitely large in a finite proper time! [I like to call this the "Big Smash". Caldwell's "Big Rip" would really destroy the universe, but I think that it is more likely that the universe would shatter into disconnected pieces.

Which is why I prefer "smash" to "rip". Of course, I don't know what happens to the pieces---but, since the pieces are themselves infinitely large, I would expect them to shatter too, and that the process would continue forever.] More bizarre still is the fact that in these cosmologies, the density *increases* as the Universe expands, and in fact it diverges as the Universe expands "infinitely fast".

That seems a little too extravagant for me, so I have constructed a family of [extremely simple] Phantom cosmologies with no beginning or end---they are in fact asymptotically deSitter in a certain sense. Like deSitter space, they have a disconnected conformal boundary, but *unlike* deSitter space, their Euclidean versions *still* have a disconnected boundary, so they are ideal for exploring the question as to whether the one-to-one bulk/boundary correspondence can really be maintained. In fact, I claim that this disconnectedness of the Euclidean boundary is a signal that, contrary to appearances, these spacetimes are "effectively disconnected".

Using recent ideas of Andrew Strominger and Vijay Balasubramanian et al, I argue that, in these spacetimes, time flows away from *both* boundaries. But there is an alternative: maybe it is OK to have a disconnected boundary if there is a quantum-mechanical "entanglement" between the conformal field theories on the two connected components. This would be analogous to Maldacena's rather amazing analysis of the quantum mechanics of asymptotically anti-deSitter black holes. If this version is correct, then the evolution of our Universe is controlled by strictly quantum-mechanical effects in theories which are defined on our *infinite* future and *infinite* past! This ties in with the recent efforts of Turok and Steinhardt et al to abolish the Big Bang and replace it with a Big Bounce. My paper on this is number 48 above.
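For a sense of the timescale in Caldwell's scenario, here is a rough estimate of the rip time (a sketch using the approximate formula from Caldwell, Kamionkowski and Weinberg's phantom-energy paper; the Hubble constant, matter density and w below are illustrative values I have assumed):

```python
# Back-of-envelope "Big Rip" time for a phantom equation of state w < -1.
# Formula: t_rip - t_0 ~ (2/3) |1+w|^-1 H0^-1 (1 - Omega_m)^-1/2
import math

def big_rip_time_gyr(w=-1.5, H0_km_s_Mpc=70.0, omega_m=0.3):
    mpc_km = 3.0857e19          # kilometres in a megaparsec
    gyr_s = 3.156e16            # seconds in a gigayear
    hubble_time_gyr = mpc_km / H0_km_s_Mpc / gyr_s   # 1/H0 in Gyr
    return (2.0 / 3.0) / abs(1.0 + w) * hubble_time_gyr / math.sqrt(1.0 - omega_m)

print(round(big_rip_time_gyr(), 1))  # → 22.3 (Gyr from now for w = -1.5)
```

The closer w sits to -1, the further off the rip: for w = -1.1 the same formula pushes it out to over a hundred gigayears.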
A less technical and more literate version, complete with insults directed at the Null Energy Condition, may be found here. It is a version of the talk I gave at the Institut d'Astrophysique de Paris (a nice place) in July 2002.


This is the more technical version:

Papers by others:

Finally, I apologize: I have absolutely no idea what the heck the mathematics in the paper are about, and I can't give a layman's interpretation of the Big Smash (like I somewhat did in the previous posts). Hopefully I'll manage to read up on it somehow and update this post.


Going Big

Swelling wormhole could engulf the universe

IT COULD go rip, it could go crunch. Or, according to the latest theory on how our universe will end, it could be swallowed by a giant wormhole.

In a scenario dubbed the "big trip", so-called phantom energy trickling into a wormhole will cause it to swell up so much that it eventually engulfs the entire universe, says cosmologist Pedro Gonzalez-Diaz at the Institute of Mathematics and Fundamental Physics in Madrid, Spain.

Phantom energy is a form of the dark energy that could be responsible for the puzzling accelerated expansion of the universe. Its defining property is that its energy density increases with time. "Phantom energy is precisely the form of energy one needs to create a wormhole," says Diaz. "Wormholes, therefore, would actually be expected components of the space-time foam if dark energy is actually phantom energy. It is natural to study both at the same time."

Wormholes are theoretical structures connecting two regions of space, or even two parallel universes. In Diaz's scenario, phantom energy enters the wormhole through one end, making it grow. Eventually, the wormhole will grow so large and so quickly that the whole universe will be swallowed by it.

"The paper deals with a remarkable combination," says Christian Armendariz-Picon at Syracuse University, New York. "However, wormholes have two entries. What happens at the second one?"

From issue 2525 of New Scientist magazine, 12 November 2005, page 23

I read this a few years ago and the name had me laughing for weeks on end (SERIOUSLY, NOT AT THE CONTENT).


And all the phantom stuff by Pedro Gonzalez-Diaz

Wednesday, January 03, 2007

Feynman Lecture Series

haha, guess what?

i just bought the Feynman lecture series, sigh sigh sigh...
I first thought of writing an enormous article on it, including the words OMFG, WTF, YAHOO, etc., which all somewhat sufficiently express my excitement and joy, but it turns out that it's pretty useless to do that (I got bored lol).

So, here's the pic :D

Happy New Year to everyone! Stay tuned for a post on the speed of gravity.


Monday, December 25, 2006

Back in da Mood, Microscopy Part 2: SHIM

Hello, deepest apologies for leaving this dear blog such a wasteland during the days of December. Finally back in da mood to blog again. But then, who wants to read my dear blog anyway? lol.

Sad to say, I am only gonna publish the second part. There won't be any third part, as I had extreme trouble reading the third paper, which is on the PALM microscope. So, this is my project for my GEK1509 Introduction to the Nano World module.

I am only gonna get the grade on the 27th. I am quite proud of this paper, but due to my lack of talent at taking exams, my two mid-terms sucked like the suckers of a big bad octopus sucking on the *** of a virgin who just drowned after Leonardo flung her off the Titanic (ok, this means she isn't a virgin, but still, u get the picture, right?)

Without further ado (Singaporeans love to use this catchphrase, I have no idea why)...

And anyone who wants the original manuscript, which is way easier to read than this, email me anytime. It's 5,000 words long and contains a heckuva lot more content than this.. lol..

The Scanning Helium Ion Microscope

Advantages of a SHIM over a traditional SEM and FIB

SHIM technology has been widely coveted by many research industries, particularly those who want to “see” more. Nanotechnology, materials science, the biological sciences and the semiconductor industry can all benefit massively from the SHIM and its incredible resolution, high reduced brightness, small spot size and low energy spread. Below, I discuss the advantages of the SHIM topic by topic.

1. Superior Resolution

1.1 Wave-particle duality

The SEM, FIB machine, TEM, SHIM and all kinds of “matter” microscopes use the wave properties of matter. Quantum physics predicts a wave-particle duality for all known particles in this universe. This means that “matter” (fermions, be they electrons, atoms or ions) and “exchange particles between matter” (bosons, such as light) can exhibit either wave or particle characteristics depending on how the experimenter chooses to disturb the system. (I use the terms boson and fermion so as not to confuse them with the word “particle”.)

The de Broglie wavelength of particles is given by

lambda = h / (mv)

where lambda is the wavelength of the particle, mv the momentum (mass x velocity) and h the Planck constant. The equation shows that, with all other terms held constant, the wavelength is inversely proportional to the mass. As the mass increases, the matter wavelength decreases, making it possible to see smaller features. This is explained in detail below.
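To make this concrete, here is a quick calculation of the wavelength ratio; the common beam velocity is an illustrative value I chose, not one from the papers:

```python
# de Broglie wavelength lambda = h / (m v), compared for an electron
# and a helium ion travelling at the same (assumed) speed.
H = 6.626e-34        # Planck constant, J s
M_E = 9.109e-31      # electron mass, kg
M_HE = 6.646e-27     # helium-4 ion mass, kg (~7300 electron masses)

def de_broglie(mass_kg, v_m_s):
    return H / (mass_kg * v_m_s)

v = 1.0e7                        # illustrative common speed, m/s
lam_e = de_broglie(M_E, v)
lam_he = de_broglie(M_HE, v)
print(round(lam_e / lam_he))     # → 7296: helium's wavelength is ~7300x shorter
```

Since h and v cancel in the ratio, the wavelength advantage is just the mass ratio, whatever speed you pick.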

1.2 Resolving power or resolution

The resolving power or resolution of any optical system is defined as the smallest distance between the centers of two points which can be distinguished as separate in the image. (Fig 2.1)

For both the SEM and the SHIM, resolution can be affected by many things, such as the optical system, the sample thickness, the nature of the sample, etc. Assuming all of these are optimal, it boils down to something called the ultimate resolving power, which is limited only by aberrations and diffraction of the beam. If we further remove the effects of aberrations, for instance by using a monochromatic beam and lens correctors, the diffraction-limited resolution is given by

d0 = k * Cs^(1/4) * lambda^(3/4)

where d0 is the smallest distance between two distinguishable points, k is a constant, Cs the spherical aberration coefficient and lambda the wavelength of the beam. Wondering where the diffraction part of the equation is? The spherical aberration error can be made indefinitely small by decreasing the aperture angle, but past a certain point, decreasing the angle any further seriously degrades the image through diffraction. Therefore, there exists an optimum aperture angle where the diffraction and spherical aberration errors are about the same. This links the spherical aberration (one of the last two uncorrected properties) to the diffraction, hence the term “diffraction-limited resolution”. The above equation is derived for that particular angle.

The equation also shows that Cs has relatively little effect, as it is raised to the power ¼. The resolution therefore improves significantly as the de Broglie wavelength decreases, and the mass of a helium ion is around 7,000 times that of the electron. My own rough calculation gives a helium-ion wavelength about 7,000 times shorter than the electron's at the same velocity, and a resolving distance around two to three orders of magnitude smaller for a helium beam than for an electron beam. This means that if an electron beam could differentiate objects down to a metre, the helium beam could theoretically resolve down to a millimetre under optimum conditions.

1.3 Sample interaction and material contrast

Some SEMs have excellent resolution, going as far as the angstrom scale. But the SEM lacks high material contrast, which is determined by the interaction of the beam with the sample; high material contrast lets you see critical features of the sample in great detail. A helium beam has a shorter wavelength, as described above, and can therefore be focused to a smaller spot size than a traditional SEM's. ALIS Corporation has claimed that the spot size can be focused down to 1 nm, and some SEMs can go as small as 0.1 nm. But aside from the optical performance, the SHIM's other advantage is that it has considerably less interaction volume than the SEM: the SHIM's ion beam has a shallower penetration depth than an electron beam. This allows the resolution of the SHIM to track its spot size directly, which is not always the case with the SEM. Furthermore, the SHIM produces about 3-9 secondary electrons per ion, compared to roughly 1 per electron for the SEM, which means the SHIM can image properly with ion currents as low as 1 fA.
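For a feel of what imaging at 1 fA means, here is a rough signal estimate using the 3-9 SE/ion figure quoted above; the dwell time and the mid-range yield are values I assumed for illustration, not figures from ALIS:

```python
# Secondary-electron (SE) signal per pixel at very low beam current:
# ions delivered = I * t / e, each producing a few SE.
E_CHARGE = 1.602e-19   # coulombs per elementary charge

def se_per_pixel(current_a, dwell_s, se_yield):
    ions = current_a * dwell_s / E_CHARGE
    return ions * se_yield

# 1 fA beam, 1 ms dwell per pixel, 5 SE per ion (assumed mid-range yield):
print(round(se_per_pixel(1e-15, 1e-3, 5), 1))  # → 31.2 secondary electrons
```

A few dozen electrons per pixel is a workable signal; an SEM at 1 SE per electron would need several times the dose for the same statistics.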

2. Staining and sputtering

One of the best features of the SHIM is that it produces no appreciable staining whatsoever, thanks to the relatively light mass of helium ions compared to gallium. The largest artifacts with gallium FIB machines are staining (or condensation) and “sputtering”, where the gallium can actually destroy part of the sample.

In fact, the gallium FIB is used primarily for etching (making shapes by shooting at the sample). At higher beam currents, the FIB machine can serve as an atomic milling machine, performing precision cutting and etching with sub-micron accuracy; this precise milling can also be used for high-accuracy micromachining from tens of millimetres down to tens of nanometres. One of the most common applications of the FIB machine is sample preparation for the TEM (transmission electron microscope). The SHIM will still be able to do this, the difference being that there will be no undesired staining, unlike with the gallium FIB.

3. Sample preparation and life science applications

In the TEM, a tremendous amount of hard work goes into sample preparation. It is a very tedious job requiring much time and skill; there is an entire separate research field and industry devoted to sample preparation alone. Even in the SEM, insulating materials first have to be sputter-coated with a thin layer of gold to make them conductive.

Another advantage of the SHIM is that it requires little or no sample preparation, and the sample need not be conductive either. The SHIM's backscattered-ion mode can produce images of more or less the same quality as the TEM with NO sample preparation at all, as shown in Fig 2.3. The SHIM collects backscattered helium ions, much like Rutherford Backscattering Spectrometry (RBS), and these backscattered ions can yield highly detailed information on the material characteristics of the sample.

Another issue is depth of focus. In the biological sciences, a large depth of focus is highly valued, since it gives high-quality 3D images of biological samples, showing more detail and finer features. ALIS claims that its SHIM has a depth of focus up to five times larger than the SEM's.

Structure of the SHIM

The ALIS 'LookingGlass LG-2' Helium Ion Microscope is a noble-gas-source GFIS FIB machine. As explained in chapter 1, a GFIS machine can provide a high-brightness, low-energy-spread ion beam many times better than an LMIS FIB machine's. The exact details of how the ALIS SHIM works have still not been published, as ALIS is a corporation rather than a research body. However, they have published two papers, one of which describes a small part of how the ion source functions.

The ion source

As described extensively in chapter 1 (which was written in anticipation of this part), the ion source plays a critical role. The GFIS has actually been researched for longer than the LMIS, but until ALIS came into the picture, nobody could achieve the stability, reliability and beam properties needed for a helium ion beam that could match the spot size of an SEM.

The exact nature of the source that ALIS uses could not be confirmed; as I have mentioned, it is a corporate secret. I have also searched for patents, and ALIS seems to have submitted none yet. I wrote to ALIS as well, but no reply came back. From one of their papers [4], the source is a needle-type GFIS made of tungsten. An electric field of 3x10^10 V/m can be achieved at the sharpest point of the protrusion, where ionization can take place just as in field ion microscopy. At this field strength, gaseous helium atoms ionize near the “ionization discs” (Fig 2.7). Once ionized, they are accelerated away to form a beam and further accelerated by the ion column. Increasing the pressure of the helium gas increases the ion current proportionately, so the ion current can be regulated from 1 fA to 100 pA simply by adjusting the pressure.
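As a sanity check on that field strength, the standard sharp-tip approximation E ≈ V / (k r), with k around 5 (a rule of thumb from the field-emission literature), reproduces 3x10^10 V/m for plausible tip parameters; the voltage and radius below are my illustrative guesses, not ALIS specifications:

```python
# Sharp-tip field approximation E ≈ V / (k * r): the field at a needle
# tip of end radius r held at voltage V, with geometry factor k ≈ 5.
def tip_field(v_volts, r_m, k=5.0):
    return v_volts / (k * r_m)

# e.g. an assumed 15 kV on a tip with an assumed 100 nm end radius:
print(tip_field(15e3, 100e-9))  # ≈ 3e10 V/m
```

The takeaway is just that fields of this magnitude need either very high voltages or very sharp tips, which is why the tip-shaping technology matters so much.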

The technology that enables this highly stable beam is the carefully shaped tungsten tip. ALIS has not disclosed how this is achieved, but states that “the ability to consistently shape the end form of the emitter with atomic precision is an essential part of the helium ion microscope technology.”

In my opinion, this tip must surely be a variation of the GFIS supertip mentioned in chapter 1, since ALIS speaks of “shaping” the tip. There has been extensive research on supertip GFIS over the years, and some of the problems are described below.

Challenges of the needle supertip ion source

As mentioned in chapter 1, in the conventional literature a needle-tip GFIS or FIM (field ion microscope) needs a low temperature (below 77 K), typically maintained with a coldfinger, to operate stably.

It is also known that in previous attempts to make a suitable ion source, the GFIS beam was very unstable: tiny impurities in the helium gas could seriously affect the beam current, and small changes in pressure and temperature also affected beam quality. It would be very interesting to know how ALIS overcame these problems. In fact, the supertip needle type is even more unstable than the normal needle type with a coldfinger. If all these problems really were solved by carefully shaping the tip, as ALIS claims, it must have been a major breakthrough for ion source technology.

Potential Applications

If I am not mistaken, this is clearly a serious and major breakthrough in “seeing” things, and the applications are immense. Imagine a microscope with the resolution of a TEM that does not require sample preparation, provides higher voltage and material contrast information, and uses an inert ion beam with much less interaction volume, leaving no artifacts. And to top it all off, this is still just the start, whereas electron microscopy and focused ion beam microscopy have been under development for over 40 years.

ALIS plans to use its LG-2 for semiconductor applications first, especially defect inspection in the wafer fabrication process, thanks to the extreme image contrast it is capable of. Failure analysis matters here because a failed sample can be damaged further by the heavy sample preparation that traditional microscopes require; the SHIM needs only very simple sample preparation, cutting down on time, defects and errors.

Again, owing to the high material contrast of the SHIM, measurement applications are also promising. The SEM has very high resolution but low material contrast, making it harder to see the sites where critical features and changes occur.

Another feature of the SHIM is its backscattered helium ions, which provide very accurate information about the atomic number of the substrate material while carrying no topographical information whatsoever. This is similar to Rutherford backscattering spectrometry, long used as an analytical technique in materials science research. The backscattered ions can portray rich chemical composition with high contrast.
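The atomic-number sensitivity can be seen from the standard RBS kinematic factor, the fraction of beam energy a helium ion keeps after bouncing straight back off a target atom of mass M2 (a textbook formula, shown here for the simple 180-degree scattering geometry):

```python
# RBS kinematic factor at 180 degrees: K = ((M2 - M1) / (M2 + M1))^2.
# A helium ion (mass 4) retains very different energy fractions from
# light and heavy targets, which is what encodes atomic-number contrast.
def kinematic_factor(m1, m2):
    return ((m2 - m1) / (m2 + m1)) ** 2

M_HE = 4.0
print(round(kinematic_factor(M_HE, 12.0), 2))   # carbon target: → 0.25
print(round(kinematic_factor(M_HE, 197.0), 2))  # gold target:   → 0.92
```

A backscattered ion that kept 92% of its energy came off something heavy like gold; one that kept 25% hit carbon, so an energy spectrum of the backscattered ions maps out the sample's composition.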

There are even more applications in the life sciences, as the non-sputtering SHIM will be able to preserve delicate samples while delivering TEM-class resolution. Simpler sample preparation will also help with sample preservation. The high material contrast, once again, contributes to distinguishing fine features; this is especially useful in biology, where very minute and complex features are of the greatest interest and importance.

Carbon nanotubes are among the hardest things to image, as they are made of rolled-up graphene sheets with large hexagonal holes in them; SEM electrons normally penetrate straight through these holes, especially if the nanotube is single-walled. Researchers use the TEM and AFM (atomic force microscope) instead, which take up much time and cause many headaches. The SHIM, as ALIS claims (I have no idea how this is plausible), will achieve TEM resolution without the complicated sample preparation.

There are also potential applications in the photomask production process. Today's methods use either electron beam lithography or gallium FIB machines to etch the substrate material. The gallium FIB machine, as repeated many times above, “stains” the sample, which of course complicates the process with extra steps to reduce this staining. Electron beams cannot compete with ions in etching, since an electron is only about 1/1836 the mass of a hydrogen atom. With the helium beam, which is relatively inert and lighter than gallium yet much more massive than an electron, the semiconductor industry could see enormous benefits, with much more accurate and perhaps even picometre-scale photomask processes.

The atomic-precision tip-shaping technique claimed by ALIS might also have properties applicable to other fields of interest. It could lead the way not only to helium ion beams but to other noble gas beams, such as xenon, and all sorts of noble gas ion beams might find a myriad of applications.

Final say

I feel that this is certainly the start of a new era in microscopy. The intricate details of how the ion source needle is shaped will probably be published in the near future. There are incredible prospects in this new field, which will touch many areas of research and solve many problems that remain due to the inadequate resolution and qualities of today's microscopes. Congratulations to ALIS Corporation for making another large contribution to mankind's future.

Bibliography and Acknowledgements


  1. V.N. Tondare, Quest for High Brightness, Monochromatic Noble Gas Ion Sources, J. Vac. Sci. Technol. A 23(6), 1498, 2005.
  2. J. Notte*, R. Hill, S. McVey, L. Farkas, R. Percival, B. Ward, An Introduction to Helium Ion Microscopy, Microscopy & Microanalysis Vol.12, Supplement S02, 126, 2006. *ALIS Corporation, Peabody, MA, USA.
  3. J. Notte, Bill Ward, Sample Interaction and Contrast Mechanisms of the Helium Ion Microscope, Scanning Vol.28, 000-000(2006).
  4. Mark Wendman, Wendman’s View on Nanotech.
  5. Cecil E. Hall, Introduction to Electron Microscopy 2nd edition (McGraw-Hill 1966).
  6. Erwin W. Muller & Tien Tzou Tsong, Field Ion Microscopy; Principles and Applications (Elsevier 1969).
  7. Spence, John C. H., High-Resolution Electron Microscopy 3rd edition, (Oxford University Press 2003).
  8. ALIS Corporation website.

Diagram Ref.

  1. V.N. Tondare, Quest for High Brightness, Monochromatic Noble Gas Ion Sources, J. Vac. Sci. Technol. A 23(6), 1498, 2005.
  2. Mark Wendman, Wendman’s View on Nanotech.
  3. Cecil E. Hall, Introduction to Electron Microscopy 2nd edition (McGraw-Hill 1966).
  4. ALIS Corporation, 2006.

Further References (indirect references and further reading on the GFIS supertip; details of the “ten years” of research may be found in these selected papers. I chose not to include them in the term paper due to their extremely technical and obscure nature.)

  1. Th. Miller, A. Knoblauch, S. Kalbitzer: Appl. Phys. A 61, 99 (1995).
  2. Knoblauch, Th. Miller, Ch. Klatt, S. Kalbitzer: Nucl. Instrum. Methods B 139, 20 (1998).
  3. P. R. Schwoebel and G. R. Hanson, J. Vac. Sci. Technol. B 3, 214, 1985.
  4. S. Kalbitzer, Appl. Phys. A: Mater. Sci. Process. 79, 1901, 2004.
  5. G. R. Hanson and B. M. Siegel, J. Vac. Sci. Technol. 16, 1875, 1979.
  6. Th. Miller, A. Knoblauch, S. Kalbitzer: In Proc. Int. Symp. Materials Science Applications of Ion Beam Techniques, ed. by A.G. Balogh.
  7. K. Jousten, K. Böhringer, R. Börret, S. Kalbitzer: Ultramicroscopy 26, 301 (1988).
  8. G. Walter (Trans Tech Publications Ltd., Uetikon-Zürich, Switzerland 1997) pp. 433–438.
  9. P. Schwoebel and G. Hanson, J. Appl. Phys. 56, 2101, 1984.
  10. Hanson and B. M. Siegel, J. Vac. Sci. Technol. 19, 1176, 1981.
  11. S. Kalbitzer and A. Knoblauch, Appl. Phys. A: Mater. Sci. Process. 78, 269, 2004.


I would like to thank “Claude Bile” of Physics Forums for advising me on various matters in this report. You may find our conversation there.

Some additional images; the ones on the left were taken by an ALIS SHIM, while those on the right were taken by traditional scanning electron microscopes.

Monday, November 20, 2006

New Scientist 50 Years

I read New Scientist's 50-year anniversary issue today on the internet. New Scientist is very dear to me; I can say that I "grew up" with it. Congratulations to New Scientist. I hope that I (and this Earth) will live to see its centennial. Congratulations again.

Friday, November 10, 2006

Trouble With Physics

YAY!!! I finally got my long-awaited book. YOU CAN'T FIND THIS BOOK ANYWHERE IN SINGAPORE!!

Had to wait 3 weeks for it to arrive.

Here's the pic

It's a book on how physicists have failed in the last half century at string theory and other stuff.

I will write a full report on it soon, but u can read all about it here

scroll down a little to see.

i will write more (despite the fact that i don't have an audience at all haha)

lastly, don't miss this, NUS students!!!

Sunday, November 05, 2006

Latest Microscopy Techniques Part 1

This is gonna be a 3-part series on microscopy, starting with this one. So it's gonna be 3 microscopes.

yep, kinda a spin-off from my Nanoworld project. I had plans to report on 3 different microscopes, but it turns out that writing about one was long enough (in fact, over the limit). So I had to compromise.

But i am gonna write it here, so don't worry haha... And this isn't abstract like the last two posts, in case people are still whining about what use Fermat's Last Theorem is.

Microscopy Technique No. 1 : STORM : Stochastic Optical Reconstruction Microscopy.

This is the hottest (as in newest) thing I have gotten hold of. Published on AUGUST 9!!! That's like only 3 months ago!!! Yeah, a group at Harvard and the Howard Hughes Medical Institute led by Xiaowei Zhuang (with Michael J. Rust and Mark Bates) (go check out her page here) invented a new type of microscopy which will enable us to see biomolecules smaller than ever before, LIVE. It was published in the online edition of the journal Nature Methods.

The technique achieves resolutions down to 20 nm, which can be improved further. It can also image biological samples live. Previously, obtaining high-resolution pictures of biological samples required them to be dead (or they would die anyway) in an electron microscope.

Not wanting to use an electron microscope is also fine; there are methods which enable imaging with up to 1-nanometre accuracy. Traditional methods of fluorescence microscopy are bound by the diffraction limit of the wavelength of light. This limit can be broken by methods such as near-field scanning optical microscopy, stimulated emission depletion, etc., but all of these have their limitations, such as taking much time, introducing defects, etc. FIONA (fluorescence imaging with one nanometre accuracy) literally achieves an accuracy of 1 nanometre, but this does not mean 1 nm resolution, as emitters that are too close to each other can no longer be distinguished properly.
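For reference, the diffraction limit that conventional fluorescence microscopy runs into is given by Abbe's formula d = lambda / (2 NA); the green-laser wavelength and numerical aperture below are typical illustrative values I picked:

```python
# Abbe diffraction limit d = lambda / (2 * NA) for far-field optics.
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    return wavelength_nm / (2.0 * numerical_aperture)

d = abbe_limit_nm(532.0, 1.4)   # 532 nm green laser, high-NA oil objective
print(round(d))  # → 190 (nm), about 10x coarser than STORM's ~20 nm
```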

Now comes the twist: in 2005, Xiaowei Zhuang et al. discovered a new kind of fluorescent molecule that could be turned on and off at will. These photoswitchable fluorophores are activated when shone with a particular wavelength of light. So it naturally follows that if you have different types of these molecules, corresponding to different wavelengths, you can randomly choose which ones to light up, thus clearing the problem of all of them shining together and losing resolution because they are too close.

Full details are as follows,

Imagine a little pie cut into six pieces with six dots on each piece; the dots are the fluorophores. They are first forced into a "dark state" by shining a red laser on them, so all of them are now dark. Now a green laser is shone on "some" parts of the pie (to vamp this up, imagine loads of pies with the green laser going around shooting at random). Those parts are now activated. In the next cycle, the red laser is shone everywhere, as in the first step; the activated fluorophores emit fluorescence until they are switched off, and this time the detector records a high-accuracy position for each activated fluorophore. Because a stochastically different set of fluorophores is activated in each cycle, repeating this for hundreds of cycles and combining the recorded positions reconstructs the full image. The fluorophores can survive hundreds of cycles before they become permanently photobleached (useless, in a sense).
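The cycle described above can be caricatured in a few lines: two emitters sit closer together than the diffraction limit, but since only one is active per cycle, each localization is precise, and accumulating cycles recovers both positions. (A toy sketch; the 8 nm localization precision, the one-active-per-cycle rule and the midpoint split are my simplifying assumptions, not the paper's procedure.)

```python
# Toy STORM: stochastic activation + per-cycle localization, then
# reconstruction from the accumulated localizations.
import random

random.seed(0)
TRUE_POSITIONS = [0.0, 50.0]   # two emitters 50 nm apart (sub-diffraction)
LOC_PRECISION = 8.0            # per-cycle localization noise, nm (assumed)

localizations = []
for _cycle in range(500):
    # green laser stochastically activates one fluorophore ...
    active = random.randrange(2)
    # ... red laser reads it out; a centroid fit gives a noisy position
    localizations.append(random.gauss(TRUE_POSITIONS[active], LOC_PRECISION))

# naive reconstruction: split the localization cloud at its midpoint
# (a real reconstruction renders all the points into a composite image)
left = [p for p in localizations if p < 25.0]
right = [p for p in localizations if p >= 25.0]
print(round(sum(left) / len(left), 1), round(sum(right) / len(right), 1))
```

Each single-cycle localization is 8 nm fuzzy, yet the averaged estimates land within a fraction of a nanometre of 0 and 50: resolution comes from the statistics of many cycles, not from the optics.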

This microscopy method does not depend on the wavelength of light, only on how many photon counts one can get from one imaging cycle. The group obtained ~3,000 photons per switching cycle. This depends on the dye (fluorophore) used, the wavelength of light and many other factors. The group used a cyanine dye and stated that other kinds of optically switchable dyes could be used too.

The dye they used was a Cy5-Cy3 pair: Cy5 is the primary fluorophore, and Cy3 is a secondary chromophore which helps the Cy5 change state. The Cy5 and Cy3 are attached to opposite strands of a DNA double helix (with loads of other additives and methods which I have no idea about). The Cy3 drastically enhances the switching of the Cy5. Without it, "less than 10% of dark Cy5 molecules per field of view were switched back on when using a green laser intensity of up to 100 W cm^-2, three orders of magnitude higher than that required for rapid recovery of Cy5 in the presence of Cy3."[2] (I cut and pasted from ref. 2, since I can't phrase it any better than that :P)

yep, this ends part one of my "super" microscopy report. It was exciting, wasn't it? The applications are numerous, but for me (ever the "pure" guy that I am), I don't really care about the applications; the discovery itself is pretty cool stuff already. haha, and I thoroughly, regretfully, sinfully apologize for any mistakes and misunderstandings in this article, as this was tough stuff, reading two papers, and I could have missed something (or I could have been plain stupid)

The news report:

The papers:

And to see the videos of the optical switches:
(Just watch this, it's damn illuminating (lame joke))

[1]M. J. Rust, M. Bates, X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)", Nature Methods 3, 793-795 (2006)

[2]M. Bates, T. R. Blosser, X. Zhuang, "Short-range spectroscopic ruler based on a single-molecule optical switch", Phys. Rev. Lett. 94, 108101 (2005)

Sunday, October 29, 2006

Travelling Back in Time; A New Twist (Again, duhhh...)

Yes yes, another one about time travel, AGAIN! But this one is somewhat different cuz it's worse than time travel.

Imagine that what you are doing right now is actually affecting your past. What you are doing now is actually deciding whether this universe was plausible or not 13 billion years ago. What's done is done... or is it?

I read this article in New Scientist, and it's actually very much another dimension to fry your brain cells over. It also adds a new twist to the properties of teleportation and turns the causality of the universe on its head. I don't know whether the experiments have succeeded or not, but still, it's all good reading to ruminate over.

In classical Newtonian mechanics, all the equations of kinematics were "time symmetric". This means that processes run towards the future work the same as they do in the reverse direction (in time). Take a ball moving with constant acceleration: we can know its past, when it started from zero velocity, easily by just using the equations. We can also know its future, if e.g. the acceleration was negative and the ball was gonna stop somewhere in the future.
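The time-symmetry claim is easy to check numerically: evolve a ball forward under constant acceleration, flip the sign of its velocity, and run the very same update again; it retraces its path exactly back to the start. (A toy sketch; the numbers are arbitrary values I chose.)

```python
# Time symmetry of constant-acceleration kinematics.
def step(x, v, a, dt):
    # exact constant-acceleration update: x' = x + v dt + a dt^2/2
    return x + v * dt + 0.5 * a * dt * dt, v + a * dt

x, v, a, dt = 0.0, 10.0, -2.0, 0.5
for _ in range(6):               # 3 seconds forward in time
    x, v = step(x, v, a, dt)
v = -v                           # "reverse time" by flipping the velocity
for _ in range(6):               # same equations, same acceleration
    x, v = step(x, v, a, dt)
print(round(x, 6), round(-v, 6))  # → 0.0 10.0: back where it started
```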

Then General Relativity arrived, and the 4-dimensional notion of the fabric of space-time said that past, present and future all exist seamlessly in an unchanging "block" universe. This implies many things if you can hold this view: the future and the past are no different, and causes can come from both the future and the past (note that in the common-sense view, causes only come from the past, never from the future).

Quantum mechanics demands even less of timing, and "real temporal order" is generally not important at all. In fact, Richard Feynman advanced the idea that positrons (the antiparticles of electrons) are actually ordinary electrons going backwards in time. He extended this idea with John Wheeler and produced a quantum electrodynamical model of waves going back and forth in time.

But even though all these ideas were developed, experimental proof of reverse causality was never found.

The new experiment being made to test this reverse causality extends the idea of the double-slit experiment. A photon can be detected either as a wave or as a particle depending on how you measure it. In extreme layman terms, if you place the detector nearer the source it registers as a particle, while if you place it further from the source it registers as a wave.

The experiment also uses the property of quantum entanglement. Two photons can be entangled by passing them through a special crystal. The properties of the entangled photons are then linked regardless of time or space: if one has positive spin, the other will also exhibit positive spin. You can take the two entangled particles to opposite ends of the universe, and if one photon changes from + to -, the other changes in the same instant. This was part of an experiment in Austria where physicists "teleported" a photon across a river.
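The "always matching outcomes" part can be caricatured with a deliberately naive classical toy (the function and numbers are mine, purely illustrative): each pair shares one random value, so the two measurements always agree. Real entanglement is strictly stronger than any shared-value model like this (that's Bell's theorem); this only illustrates the correlation, not the quantum mechanics behind it.

```python
import random

# Naive classical caricature of perfectly correlated spins:
# each "entangled" pair carries one shared random value, so
# measuring either photon always gives the same sign.

def entangled_pair():
    spin = random.choice(['+', '-'])
    return spin, spin  # both photons report the same outcome

pairs = [entangled_pair() for _ in range(1000)]
print(all(a == b for a, b in pairs))  # -> True: outcomes always match
```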

The apparatus for the experiment (the retrocausality one) first produces two entangled photons. One of them is sent straight to a detector at a fixed distance; the other is sent through 10 km of coiled fibre-optic cable. The second photon emerges from the cable facing a movable detector, and it arrives later than the first photon (on the order of tens of microseconds for 10 km of fibre) since it had to traverse the extra distance.

The fun starts when we move the movable detector. If we place it near the end of the cable, the second photon registers as a particle; if we move it further away, it registers as a wave. So, now what happens to the first photon? Remember that they were entangled? And since the second photon actually arrives in the so-called "future", this means that the first photon, which was in the so-called "past", was affected by the future.

This also leads to the question of how the information arrives at the first photon. Since we are using photons, they are of course moving at the speed of light, so the information must have traveled faster than light for the first photon to "know" about the status of the second one.

If the experiment does show evidence for retrocausation, it would open the door to some troubling paradoxes. If you could see the effects of your choice before you make it, could you then make the opposite choice and subvert the laws of nature? Some researchers have suggested retrocausality can only occur in limited circumstances in which not enough information is available for you to contradict the results of an experiment.

It doesn't stop here. Further extension of the idea leads to somewhat-solutions of the "Why are we here?" and "Why are the laws of physics in this universe the way they are?" type of questions. Room for retrocausality, if it exists, means that the presence of conscious observers later in history could exert an influence on the first moments of the universe, shaping the laws of physics to be favourable for life.

If anyone wants to read the full article, you may email me and I can send you an edited copy.

EDIT: I pulled down the pic cuz of copyright laws that might hound me. But if you really really reaaally want the image, you know what to do...

Fermat's Last Theorem

I have an extremely close, affectionate connection to Fermat's Last Theorem, particularly because of the book by Simon Singh, "Fermat's Last Theorem".

All the books by Simon Singh are incredible, and everyone who would like an introduction to how mathematics works should definitely get "Fermat's Last Theorem". He has other books too, so check them out; they are all darn cool.

This is not gonna be a book review, although you could think of it that way too.

Fermat's Last Theorem was the book that entirely changed my view of what mathematics is and what a mathematical proof is. I dare say this book also changed my life in some way. I only developed a serious interest in pure maths after this book. Deepest appreciation to Simon Singh, and to my local library (British Council Rangoon) too for having that book.

The book starts with the history of how Fermat left a puzzle in one of the margins of his copy of Diophantus' Arithmetica in the 1600s. Fermat's theorem simply states that

If an integer n is greater than 2, then a^n + b^n = c^n has no solutions in non-zero integers a, b, and c.

or, in Latin, in the original writing:

"Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet."

which means

(It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.)


This, of course, irked many a mathematician; even the great Gauss turned to sour grapes, saying

"I can write a multitude of propositions like that anytime, which one could neither prove nor disprove." Yes yes, sour grapes; even gods cannot escape sour grapes once in a while.

This dear proposition escaped mathematicians for over three centuries, until the 1990s. Fermat himself proved the case n = 4 elsewhere in his markings, using the method of infinite descent. Using a similar method, Euler proved the case n = 3. Over the centuries that followed, mathematicians kept proving more and more general categories of exponents (e.g. all regular primes), but no proof for all n was found.
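Just to make the claim concrete, here's a small brute-force search (a sketch of mine, not anything from the book): it finds plenty of solutions for n = 2 (the familiar Pythagorean triples), and, as Euler's proof guarantees, none for n = 3 in the range searched. Obviously no finite search can prove the theorem; it just shows what "no solutions" means.

```python
# Brute-force hunt for a^n + b^n = c^n over a small range.

def fermat_counterexamples(n, limit):
    """Return all (a, b, c) with a^n + b^n == c^n and 1 <= a <= b, c <= limit."""
    powers = {c ** n: c for c in range(1, limit + 1)}  # c^n -> c lookup table
    return [(a, b, powers[a ** n + b ** n])
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            if a ** n + b ** n in powers]

print(fermat_counterexamples(2, 20))   # n=2 is allowed: (3, 4, 5), (5, 12, 13), ...
print(fermat_counterexamples(3, 200))  # -> [] , just as Euler proved
```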

From the 1950s onwards, several connections were found relating elliptic curves and modular forms. Taniyama and Shimura conjectured that "all elliptic curves are modular".

Ken Ribet proved in 1986 that every counterexample a^n + b^n = c^n to Fermat's Last Theorem would yield an elliptic curve, defined as y^2 = x(x - a^n)(x + b^n), which would not be modular and would therefore provide a counterexample to the Taniyama-Shimura conjecture. This is the epsilon conjecture, and it means

1. Assume Fermat's Last Theorem is not true.

2. Then there exists a solution, which can be turned into an elliptic curve of the form above.

3. That particular elliptic curve is never modular (the epsilon conjecture, proved by Ribet).

4. Therefore the Taniyama-Shimura Conjecture is false.

If you run the logic backwards,

1. Assume the TSC is true.

2. Then every elliptic curve is modular, so the non-modular curve above cannot exist.

3. Since that curve cannot exist, no solution could have existed to build it in the first place.

4. There are no solutions to the equation, and thereby FLT is true.

Which was what Andrew Wiles did. He locked himself up for 7 freakin years and proved the conjecture (for the semistable case, which is enough) in 1993, spewing out plenty of new mathematics along the way. But the proof was flawed. The flaw was corrected in 1995 in an equally ingenious way. Hence FLT was finally proved, after more than 350 years...
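To make the elliptic curve y^2 = x(x - a^n)(x + b^n) above a bit more concrete: the cubic has roots 0, a^n and -b^n, so its discriminant works out to (abc)^(2n), a perfect 2n-th power. That suspiciously perfect discriminant is part of what makes the curve "too strange to be modular". A tiny sketch of my own (it uses n = 2, where solutions do exist, since for n > 2 there is nothing to plug in!):

```python
# The Frey-style curve y^2 = x(x - a^n)(x + b^n) built from a
# solution of a^n + b^n = c^n. Its cubic has roots 0, a^n, -b^n,
# so the discriminant (product of squared root differences) is
# (a^n * b^n * c^n)^2 = (a*b*c)^(2n): a perfect 2n-th power.

def frey_discriminant(a, b, n):
    A, B = a ** n, b ** n
    return (A * B * (A + B)) ** 2  # [(0-A)(0-(-B))(A-(-B))]^2

a, b, c, n = 3, 4, 5, 2                # Pythagorean triple: 3^2 + 4^2 = 5^2
disc = frey_discriminant(a, b, n)
print(disc == (a * b * c) ** (2 * n))  # -> True
```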

Haha, ok, that was a boring post. But this book has a lot more to it, of course. What I loved best was that it was made for people who have absolutely no knowledge of mathematics and brings them along on a journey of discovery from the time of Euclid and Aristotle to the final proof of Andrew Wiles.

It offers tremendous insight into what a mathematical proof is and how mathematical ideas developed through the centuries. It also covers a lot of historical aspects and figures (people, not numbers). I read the book entirely like a story, but it taught me more than any technical mathematics textbook I have held to this very day.

Simon Singh has several other books; check them out, and you can also check wiki for more on FLT and number theory. The portal there has incredible content too (dang, my SAT vocab isn't working; incredible is the only word I know).

As a last conclusion, I will leave you with the first part of the first page of the proof by Andrew Wiles:

An elliptic curve over Q is said to be modular if it has a finite covering by a modular curve of the form X_0(N). Any such elliptic curve has the property that its Hasse-Weil zeta function has an analytic continuation and satisfies a functional equation of the standard type. If an elliptic curve over Q with a given j-invariant.............

Modular Elliptic Curves and Fermat's Last Theorem
Andrew Wiles
The Annals of Mathematics, 2nd Ser., Vol. 141, No. 3 (May 1995), pp. 443-551

Prof Wu Tai Tsun and LHC

Last month I went to a lecture by Prof Wu Tai Tsun, part of an LKY distinguished lecturer series. It was on the LHC and the Higgs boson. Simply incredible is the correct expression for this event.
Prof Wu Tai Tsun is the Gordon McKay Professor of Applied Physics at Harvard. It was the fact that he was actually one of the people who built the Standard Model of this century that made this so exciting.

As usual, he talked about some history of particle physics and then moved on to the LHC at CERN: some timeline on it, and eventually the hot topic, the Higgs boson. It was quite a normal lecture on another of the great anticipation moments for theoretical and particle physicists.

Then of course, it was the Question and Answer section afterwards that was gonna be helluva hot for the day, or so I thought. The audience were complete dopes. I mean, come on, a heck of a scientist came down to talk about "THE HIGGS BOSON AND THE LHC" and you guys ask questions on how a particle accelerator works? And there was this particularly annoying guy who kept asking the dear prof, who clearly did not understand Singlish, his homework questions, like what happens when a proton and an antiproton collide, and worst of all, questions straight out of A Brief History of Time, like how come there are no 4 generations of matter? etc.

I asked a question myself too; I don't think it was that bad tho. I simply asked what happens if we don't find the Higgs boson, whether that could seriously affect the Standard Model, or whether there are other theories to make up for the lack of a Higgs boson.

The answer was that it's ok even if the Higgs doesn't exist, since it is still a debate among various scientists whether it should exist at all, although most have come to accept it. He went on to explain that serious mistakes have been made in the past: neutrinos were thought to have no mass, but it later turned out that they have a little mass after all. There are also other theories which need not include a Higgs boson at all. The last part I still remember is that most particle accelerators ever built seldom found the particle they were set out to find, but found other interesting results instead. So it's still ok after all, I guess.. Let's hope there are no suicides if the Higgs boson isn't found at all.

Well, ok, thanks to Prof Wu for such an illuminating lecture. Thanks to the organizing parties too, like LKY and NUS etc.