
Discussion (150 Comments)
Demonstrating that this stuff is possible isn't the hard part, it seems. Productionizing it is. You have to have exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need a separate device to read and another to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.
Stick this on the "Wouldn't it be nice if graphene..." pile.
Red LEDs were invented / discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented in the 1920s or so and became a commercial success in the 1980s.
Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.
Do you just think Google hates money, or does this only work for hover cars?
“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.
In short, if a tech takes 40 years to be commercialised it would have been invented some time in the 80s.
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
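Quick sanity check of that figure, as a throwaway Python snippet (assuming 1 EB = 10^18 bytes and a 30-day month):

    # one exabyte read over one month, expressed in terabits per second
    exabyte_bits = 1e18 * 8            # 1 EB = 1e18 bytes = 8e18 bits
    month_seconds = 30 * 24 * 3600     # ~2.59e6 seconds
    print(exabyte_bits / month_seconds / 1e12)   # ≈ 3.1 Tbps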
Or maybe RAEND
Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives it'll get a huge amount of users.
But I 100% agree with your main point about possibility vs productionisation.
I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult (and I think the paper kind of glosses over this: read caching and defect mapping could be trickier than it implied. Accessing the tape from both sides also seems like it will make the engineering more difficult).
I do not think that any new memory device principles have been invented since WWII. Already by 1940, the inventor of DRAM, John Vincent Atanasoff, had enumerated almost all principles that can be used to make a memory device.
The first DRAM of Atanasoff was made with discrete capacitors; five years later von Neumann proposed to use iconoscope cathode-ray tubes instead, which were used for a few years before being replaced by magnetic core memories. The Intel company was formed for the commercialization of the first (1-kbit) DRAM integrated circuit made with MOS transistors.
The memory described in TFA is in principle equivalent to a memory made with mechanical toggle switches or latching relays with mechanical latching, where the 2 stable states are maintained by elastic forces and you can toggle the state if you apply a great enough force to the switch.
Reducing a mechanical bistable device to the size of a few atoms reaches the possible limit of memory density. As described in the parent article, this device should be able to store information safely and it should be able to switch its state quickly.
The difficulties are not in the memory cell itself, but in how to enable fast and accurate reading and writing. While the memory cell itself may have the minimum size permitted by the atomic structure, there is no way to miniaturize to the same extent any kind of reading and writing interfaces, so that they could be incorporated in the memory cell, like in an SRAM cell.
Therefore the only solution that can preserve the high cell density is to have a read/write head that is shared by a great number of cells, i.e. which must be moved in order to access different cells.
So the memory, at least within some block, must have mechanical access, so it must be implemented as a tape or a disc. Multiple heads could be used to increase the read/write speed, as is also done for magnetic memories.
So I do not think that there is much to criticize in this paper, it makes sense and it identifies a new material that is suitable for implementing a known kind of memory cell at an atomic scale, even if it is unlikely that a practical memory based on this concept will become possible any time soon.
Microsoft has worked for many years on their glass memory devices, which have much more important advantages, and they are still far from being able to sell such devices, mainly due to the cost of the required lasers, for which there is a chicken-and-egg problem: they are very expensive because they are produced in very small quantities, and they cannot be incorporated in a device intended for mass production because they are too expensive.
I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.
The read and write speed are absolutely critical but having to utilize multiple devices isn’t anything new at all.
I more meant we no longer have room-sized libraries unless the cloud providers have commissioned something custom and not available to the public. I believe the last installed Powderhorn I'm aware of was decommissioned almost a decade ago now.
https://www.iscgroupllc.com/products/storagetek/storagetek-p...
Are you only thinking about home consumer applications?
Flash, on the other hand, had made steady incremental progress from the time it was first described until it was fully commercialized.
Of course, wouldn't you expect that for a fairly mature technology you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? I mean, you could have written a comment that was perfectly analogous to your paragraph above about how AI and neural networks never really amounted to much for about 50-60 years until, all of a sudden, they did (and even if you think AI may currently be overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together).
I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to generally more curiosity, while the second just leads to cynicism and jadedness.
From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.
The business model would be selling db access to VC's, R&D managers, and other folks making decisions about real money.
(I'm also unclear how the bit is supposed to actually flip under the applied electric charge without the fluorine and carbon having to pass through each other.)
Technical note, because it's jargon:
"Real" means position = A * sin(w * t)
"Imaginary" means position = A * expt(w * t)
(because expt(w * i * t) = cos(w * t) + i * sin(w * t))
If you calculate in a computer an ammonia molecule with all the atom is a plane z = 0 (instead of the usual piramidal shape), then the N in the center is in an inestable equilibrium and the N does not make small vibrations like z = expt(w * t).
It makes a big "imaginary" vibration like z = expt(w * t) that is exponential for a short time while z is almost 0, and then the approximations don't apply and it reach the z of the usual shape at equilibrium.
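A minimal numerical sketch of the same distinction (Python; the curvature values below are made up for illustration, not taken from any real ammonia calculation): positive curvature gives a real frequency and a bounded oscillation, negative curvature gives an imaginary frequency and exponential growth while the displacement is small.

    import numpy as np

    # Toy 1-D mode with curvature k and reduced mass m: w = sqrt(k/m).
    # k > 0 (stable minimum) -> real w, small oscillation z = z0*cos(w*t)
    # k < 0 (saddle point)   -> imaginary w, runaway z ~ z0*cosh(|w|*t)
    m = 1.0
    z0 = 1e-3                            # small initial displacement
    t = np.linspace(0.0, 1.0, 5)
    for k in (+4.0, -4.0):               # illustrative curvatures, not real data
        w = np.sqrt(complex(k / m))      # complex sqrt handles negative curvature
        if w.imag == 0:
            z = z0 * np.cos(w.real * t)  # bounded oscillation
        else:
            z = z0 * np.cosh(w.imag * t) # exponential growth while z is nearly 0
        print(f"k={k:+.1f}  w={w}  z(t)={np.round(z, 6)}")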
The AFM mechanism described as “tier 1” (very strong LLMism, btw) is somewhat optimistic but realistic. The fields needed are large compared to usual values in solid-state devices, but I’d guess achievable with an AFM. But “tier 2” is vague and completely speculative. Some random things I noted:
- Handwaving that (not exact quote) “the read controller is cached. No need to read the same bit twice”. Cached with what?? If this miraculous technology can achieve 25 PB/s, what can possibly hope to cache it? More generally, it’s a strange thing to point out.
- Some magic and completely handwaved MEMS array that converts an 8 um spot-size laser beam into atomic-resolution 2D addressing? In my opinion this is the biggest sin of the manuscript. What I understood to be depicted is just fundamentally physically impossible.
- A general misunderstanding of integrated electronics, and dishonest benchmarking: comparing real memory technologies being sold at scale right now vs theoretical physical bounds on an untested idea. Also no mention of existing magnetic tape as far as I can tell.
- Constantly pulling out specific numbers or estimates with no citation and insufficient justification. Too many examples to even count.
I’m sorry for the harsh language, I wouldn’t use it for a usual review. But in my opinion this needs a very heavy toning down and a complete rewrite, and is unfit for a proper review. Final remark: electronics is, and will always fundamentally be, intrinsically denser than optics. Some techniques “described” here, if they were possible, would have been applied to existing optical tech (e.g. phase-change materials in Blu-ray).
> Once a region of tape has been read, the controller stores the result. Subsequent operations reference the cache rather than re-interrogating the physical medium. Re-reading a known bit is unnecessary; the controller already holds its state
However, earlier, the paper claims:
> The transformer architectures underpinning modern large language models are bandwidth-limited, not compute-limited [1–3]. The energy consumed moving data between DRAM, NAND flash, and processor cache already exceeds the energy consumed by arithmetic in datacenter AI accelerators [2]. This is not an optimization problem. It is a materials problem [emphasis mine].
as part of a longer rant about the AI "memory wall" in the very first section. If we open with a long spiel about how memory is expensive in material cost and energy cost, and this material is the solution for that, then what are we caching the read in? On that note, what kind of computer engineer thinks about cache on the order of individual bits on a medium?
And, as you point out, 25 PB/s is a lot. Around 1000x that of a typical on-die SRAM cache, I think.
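For what it's worth, the ~1000x ratio is easy to sanity-check if you grant an assumed aggregate on-die cache bandwidth on the order of 25 TB/s (my rough assumption, not a measured spec):

    claimed_rate = 25e15      # 25 PB/s, the figure quoted from the paper
    assumed_sram = 25e12      # ~25 TB/s on-die SRAM bandwidth (rough assumption)
    print(claimed_rate / assumed_sram)   # -> 1000.0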
A while later, the author speaks of using atomic force microscopy to read the data back. The size of AFM scans is, in practice, as I understand it, on the order of square micrometers. I think this whole paper is an AI-driven, as you put it, 'fever dream', enabling an author to put forth 60 pages of sciencey claims and sciencey math without -- as far as I can tell -- any concrete and novel scientific result of any kind. AI-driven reality warps are not new; the difference is that nowadays AIs are good enough at sounding smart to get past the barriers of a typical smart person who might want to be fooled or make a show of being open-minded. Later on, the author proposes using "shaped femtosecond IR pulses" -- without further elaboration -- to address single atoms! IR wavelengths are on the order of a micrometer at minimum!
Given the amount of AI writing involved, I'm pretty sure that you actually meant "this paper is inane". Or maybe both!
The caching comment refers to the Tier 1 controller holding a bitmap of bits it has already scanned — standard practice in any scanning probe system. It's not competing with the storage medium for capacity.
Tier 2 is explicitly labeled speculative. The paper's validation target is Tier 1: one C-AFM scan, one voltage pulse, existing equipment.
The core contribution is not the architecture — it's the physics: a verified transition state for C-F pyramidal inversion at 4.6 eV (B3LYP) and 4.8 eV (CCSD(T)), one imaginary frequency, barrier below bond dissociation. That's standard computational chemistry, not handwaving. The architecture sections are forward-looking by design.
The fluorine passes between two carbon neighbors through a C-C gap of 2.64 Å at the transition state — not through any atom. This is pyramidal inversion, the same mechanism as ammonia, but with a 4.6 eV barrier instead of 0.25 eV.
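A rough classical Arrhenius estimate shows why that barrier difference matters for retention (Python sketch; the 1e13 Hz attempt frequency is an assumed typical prefactor, not a number from the paper, and ammonia's real inversion is tunneling-dominated, so treat this as order-of-magnitude only):

    import numpy as np

    # mean thermal flip time ~ exp(E / kT) / nu at 300 K
    kT = 8.617e-5 * 300      # Boltzmann constant (eV/K) * 300 K ≈ 0.026 eV
    nu = 1e13                # assumed attempt frequency, Hz
    for E in (0.25, 4.6):    # ammonia-like barrier vs the claimed C-F barrier, in eV
        flip_time = np.exp(E / kT) / nu
        print(f"E = {E} eV -> ~{flip_time:.1e} s")
    # 0.25 eV -> ~1.6e-9 s; 4.6 eV -> ~2e64 s (effectively never at room temperature)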
Magnetic tape comparison is in Table 2.
How is this lost on people? Everything that contains the slightest hint of "AI slop" is instantly panned anywhere it appears, and yet people such as Ilia Toli appear to be entirely oblivious to this.
It's tragic. There is at least a non-zero chance that this work is a world changing breakthrough. It's clear, based on his engagement with comments here, that he at least believes this. And yet the first thing the guy does with it is debase it all using a clanker.
It boggles the mind.
We're seeing this throughout academe, in courts with both lawyers and judges, and among lawmakers and journalists. Several times a week one or another of these makes another headline for misapplying "AI". It seems that the work for which we are all expected to have the highest regard is coming from people that are completely witless; both unaware of how transparent this is and unaware of the consequences.
You have to be deeply ensconced inside an impenetrable bubble to do that to yourself.
Does that mean a scanning tunneling microscope is the I/O mechanism? That's been demoed for atom-level storage in the past. But it's too slow for use.
And what’s the reason for going solo vs a research university, where I assume this type of research could be significantly sped up?
Edit: https://www.mathgenealogy.org/id.php?id=61429 It looks quite unrelated
https://www.researchgate.net/publication/258423577_Data_Stor...
Clearly they have been working on this for over a decade.
I had thought for a while about a way to store data that makes use of an idea that I had for sub-diffraction limited imaging inspired by STED microscopy.
First an overview of STED. You have a "donut" shaped laser (or toroidal laser) that is fired on a sample. This laser has an inner hole that is below the diffraction limit. This laser is used to deplete the ability of the sample to fluoresce, and then immediately after a second laser is shone on the same spot. The parts of the sample depleted by the donut laser don't fluoresce and so you only see the donut hole fluoresce. This allows you to image below the diffraction limit.
My idea was to apply this along with a layer in the material that exhibits sum frequency generation (SFG). The idea is that you can shine the donut laser with frequency A and a Gaussian laser with frequency B at the same spot. When they interact in the SFG material you get some third frequency C as a result of SFG. Then, below that material would be a material that doesn't transmit frequencies C and A.
Then what you'd be left with after the light shines through those two layers is some amount of light at frequency B. The brightness inside the hole and outside of the hole would depend on how much of the light from frequency B converts into frequency C. Sum frequency generation is a very inefficient process, with only some tiny portion of the light participating. But my thinking is that if laser B is significantly less bright than laser A, then most of the light from laser B will participate in sum frequency generation where it mixes with laser A, and you'll be left with only a tiny bit of laser B outside of the hole. That way you get a nice contrast ratio for the light at frequency B between the hole and the surroundings, which then allows you to image whatever is below these layers below the diffraction limit.
In my idea the final layer is some kind of optical storage medium that can be be read/written by the laser below the diffraction limit. Obviously aiming this would be hard :) My idea was that it would be some kind of spinning disk, but I never really got to that point.
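If it helps to make the intended contrast concrete, here is a toy model with made-up numbers (the 95% conversion fraction is purely illustrative, not a measured SFG efficiency):

    # Light at frequency B that survives the SFG layer plus the C/A-blocking layer.
    # Inside the donut hole there is (ideally) no laser A, so no up-conversion;
    # outside, a fraction eta of B is converted to C and absorbed along with A.
    def transmitted_B(I_B, eta):
        return I_B * (1.0 - eta)

    I_B = 1.0                               # incident Gaussian-beam intensity (arb. units)
    inside = transmitted_B(I_B, eta=0.0)    # hole: no donut light, no conversion
    outside = transmitted_B(I_B, eta=0.95)  # surroundings: assumed 95% converted
    print(inside / outside)                 # contrast ratio of 20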
You're comparing to current memory technologies, but there are also some optical technologies like AIE-DDPR, which presumably is (a lot?) less dense but has layers (I noticed you're also discussing a volumetric implementation). Would devices based on your technology be simpler/faster? (I guess optical disks aren't intended to replace high-speed memory.) What about access times?
I'm sure you could take this material and write a couple papers out of it, but right now this is a 60 page word document with commentary on a variety of topics from memory market economics to quantum computing.
It's full of self-congratulatory language like "The transition is not an incremental improvement within the existing paradigm; it obsoletes the paradigm and the infrastructure built around it". Alright, I'm happy to believe that this work is important. But this is not the neutral tone of a scientific article, it reads like ad copy for a new technology.
I'm sure there's interesting physics in there, but it needs a serious editing effort before it could be taken seriously by a journal.
Smells like laziness to me.
fluorographane -> Fluorographene
Can't find a single page about fluorographane
https://en.wikipedia.org/w/index.php?search=fluorographane&t...
But this
https://en.wikipedia.org/wiki/Fluorographene
https://nowigetit.us/pages/d7f94fd0-e608-47f9-8805-429898105...
> A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude.
Are we supposed to read all these stories as lies?
Now it doesn’t say that this is easy to produce, but if those claims are true, it doesn’t really matter if it is very expensive.
It doesn’t say either if the stuff can withstand live conditions.
It’s annoying not to be able to trust whether solutions like these are viable or not.
You should not expect that even the best of ideas at this stage are going to turn into products on any reasonable timescale; this is at a super early level of development and there are probably more things that can go wrong than you are imagining. But the paper shows there has been a good amount of effort at this stage to evaluate the robustness of the storage: the whole reason for this particular arrangement seems to be that it's pretty robust while still being writable. (Though anything nanoscale is not something you're going to be able to handle directly.)
[1]: https://www.tampabay.com/archive/1991/06/23/holograms-the-ne...
Research can be interesting but so often none of it goes anywhere, it's just hype and there's a reproducibility crisis in academia. Look at the decades wasted on academic fraud and appeals to authority with Alzheimer's research [1].
Most of this media is the academic equivalent of "doctors HATE this guy".
[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC12397490/
Or, to imply guilt by association by first constructing a false stereotype of research in one field, and then applying it onto an instance of research in another field?
The price of the 50 kWh unit I had put into my house was very low.
Sodium-ion is ramping up too but is commercially available. That straight up wasn't possible a few years ago until the electrode breakthroughs.
It was under subsidy, but I got about double what I was going to get about 6 months prior. There are 50 kWh units going on AliExpress for about $12k AUD outright, so I think there's been another step down in per-cell costs which is trickling through.
I'm waiting for a price cut to make outright purchases a bit more affordable, but with a wholesale electricity service plan, adding another, say, 100 kWh probably works out.