Discussion (150 Comments)

bastawhiz1 day ago
Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.

Demonstrating this stuff is possible isn't the hard part, it seems. Productionizing it is. You have to have exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need a separate device to read and another to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.

Stick this on the "Wouldn't it be nice if graphene..." pile.

nine_k1 day ago
It took 15 if not 20 years to commercialize even such an obvious, low-tech thing as the radio telegraph, which can literally be built from common household supplies. That happened about 60 years after Maxwell predicted electromagnetic waves theoretically.

Red LEDs were invented/discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented in the 1920s or so and became a commercial success in the 1980s.

Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.

bastawhizabout 12 hours ago
It's a good physical effect that doesn't obviously solve any real problems. Consider: 5D optical storage is thirty years old and SOTA transfer speed is about as many megabits per second. By the time it's fast enough to even approach a speed that's commercially useful, all of the other tech we have will have continued to progress. That's not to mention how fragile the quartz disks are. It's a real physical effect that doesn't solve problems.

We already have zero-retention-energy storage. The phenomenon in this paper isn't even all that new by the author's own admission—that's how it got to the fifty-third revision. The tier 2 setup described here is purely speculative. Producing a single square centimeter of pure "fluorographane" (sic?) is still a task that would be exceedingly challenging for a research lab. And it's not clear how much energy it would take to read and write the data, or to support the hardware necessary to do it at a speed that makes it uniquely useful. Even if all of these problems are solved, and the cost is made reasonable, it's still completely unclear whether it would be substantially better than what we have today.

nurettin1 day ago
It doesn't take long to commercialize feasible new tech in this day and age. If someone invented an electromagnetic hovercar tomorrow, it would be available for sale next week, and regulations would follow.
TOMDM1 day ago
Waymo has cars that drive themselves and are dramatically safer than people in most conditions and yet they're only in select cities.

Do you just think Google hates money, or does this only work for hovercars?

atoav1 day ago
What advantage would hovering have?
antonvs1 day ago
> It doesn't take long to commercialize feasible new tech

“Feasible” is doing some heavy lifting there. The whole point of the comment you replied to is that it can take a long time for some new physical technique to become commercially feasible.

noosphr1 day ago
The only technologies that are commercialised quickly today are the ones that can be commercialised quickly. The ones that can't won't be for decades yet.

In short, if a tech takes 40 years to be commercialised it would have been invented some time in the 80s.

bitexploder1 day ago
It feels a little disjointed to compare old tech. Computing tech iteration cycles and adoption rates seem more interesting than things at the dawn of communications technology.
staplers1 day ago
Communication technologies have been evolving for billions of years
loneboat1 day ago
> who cares if it can store an exabyte if it takes all month to read it

To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
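That arithmetic checks out; a quick sketch (decimal exabyte and a 30-day month assumed):

```python
# Sustained bit rate needed to read one decimal exabyte in a 30-day month
EXABYTE_BYTES = 1e18
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month assumed

bits_per_second = EXABYTE_BYTES * 8 / SECONDS_PER_MONTH
print(f"{bits_per_second / 1e12:.2f} Tbps")  # ≈ 3.09 Tbps
```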

Firerouge1 day ago
Plus, just put 32 in striping RAID if you really need to read an exabyte a day
footaabout 6 hours ago
Eh, that doesn't math out. It's the bandwidth per storage density (or ultimately per price) that matters.

If you have great cost per byte but your bandwidth per byte is bad enough that the price per byte doesn't make up for it then you have an issue.

They've started making hard drives with multiple heads because of this issue; they increased density to the point where it's not useful to continue adding density if it doesn't come with more bandwidth.
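The bandwidth-per-capacity point can be illustrated with a quick sketch of full-drive read time; the capacities and sequential speeds below are illustrative assumptions, not measured figures:

```python
# Time to fully read a drive: capacity has grown much faster than per-head
# sequential bandwidth, which motivates multi-actuator designs.
def full_read_hours(capacity_tb: float, bandwidth_mb_s: float) -> float:
    # capacity in decimal TB, bandwidth in MB/s
    return capacity_tb * 1e6 / bandwidth_mb_s / 3600

old = full_read_hours(4, 180)    # older 4 TB drive, ~6 hours
new = full_read_hours(30, 280)   # modern 30 TB drive, ~30 hours
print(f"{old:.1f} h vs {new:.1f} h")
```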

brookst1 day ago
*RAED

Or maybe RAEND

bastawhiz1 day ago
But if you need 1 EB, waiting a whole month for it isn't great. You'd be better off with 720 one-petabyte devices taking an hour in parallel.
Dylan168071 day ago
Yes it causes problems in this increasingly narrow situation.

Massive storage that takes a month to fully read is acceptable in a wide variety of use cases. If it's cheaper than hard drives it'll get a huge amount of users.

herodoturtle1 day ago
In long term archival use cases this is less of an issue. Especially if it’s many exabytes we’re talking about, needing to be stored for decades.

But I 100% agree with your main point about possibility vs productionisation.

rcxdudeabout 24 hours ago
Well, yeah. It takes a heck of a long time to pull something out of the lab, let alone theory, into the real world, and there's a ton of ways that it can die along the way. But you do need people to be pursuing these things to actually get something into production, else there really would never be any progress. To me this reaction feels like a bit of a misunderstanding about why it's worth discussing these ideas at all: it's not meant to be a forecast of where technology is definitely going in the future, it's a potential direction that some people think is worth pursuing, and even if the odds are low for any given idea it doesn't make them worthless. (I've worked for nearly 10 years to turn something that 'worked in the lab' when I joined into an actual product, for example, and it's still not quite standing on its own feet in production yet.)

I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult (and I think the paper kind of glosses over this: read caching and defect mapping could be trickier than it implied. Accessing the tape from both sides also seems like it will make the engineering more difficult).

cameldrv1 day ago
I have no idea if this is practical but I remember when flash memory was this suspicious semi-science fiction thing too. There are probably some people on this site that remember the same for DRAM. There have been loads of things in between that didn't make it. Some of them were semi-crackpot, some actually went into production like bubble memory and Optane. Few of them have met the sweet spot of the market in a way that let them move from a niche to a dominant form of memory, but still I wouldn't discount that it's possible to invent a new form of memory that will take over the world!
adrian_babout 23 hours ago
Most kinds of memory devices are based on old principles of making a memory device, which are applied to new materials.

I do not think that any new memory device principles have been invented after WWII. Already by 1940, the inventor of DRAM, John Vincent Atanasoff, had enumerated almost all principles that can be used to make a memory device.

The first DRAM of Atanasoff was made with discrete capacitors; five years later von Neumann proposed using iconoscope cathode-ray tubes instead, which were used for a few years before being replaced by magnetic core memories. The Intel company was formed for the commercialization of the first (1-kbit) DRAM integrated circuit made with MOS transistors.

The memory described in TFA is in principle equivalent to a memory made with mechanical toggle switches or latching relays with mechanical latching, where the 2 stable states are maintained by elastic forces and you can toggle the state if you apply a great enough force on the switch.

Reducing a mechanical bistable device to the size of a few atoms reaches the possible limit of memory density. As described in the parent article, this device should be able to store information safely and it should be able to switch its state quickly.

The difficulties are not in the memory cell itself, but in how to enable fast and accurate reading and writing. While the memory cell itself may have the minimum size permitted by the atomic structure, there is no way to miniaturize to the same extent any kind of reading and writing interfaces, so that they could be incorporated in the memory cell, like in an SRAM cell.

Therefore the only solution that can preserve the high cell density is to have a read/write head that is shared by a great number of cells, i.e. which must be moved in order to access different cells.

So the memory, at least within some block, must have mechanical access, so it must be implemented as a tape or a disc. Multiple heads could be used to increase the read/write speed, like also for magnetic memories.

So I do not think that there is much to criticize in this paper; it makes sense and it identifies a new material that is suitable for implementing a known kind of memory cell at an atomic scale, even if it is unlikely that a practical memory based on this concept will become possible any time soon.

Microsoft has worked for many years on their glass memory devices, which have much more important advantages, and they are still far from being able to sell such devices, mainly due to the cost of the required lasers. For those there is a chicken-and-egg problem: they are very expensive because they are produced in very small quantities, and they cannot be incorporated in a device intended for mass production because they are too expensive.

tw041 day ago
> You probably don't want to have to need a separate device to read and a device to write.

I don’t think this would bother the average enterprise in the least. We used to have entire rooms dedicated to tape libraries that housed dozens of tape drives and thousands of tapes each.

The read and write speed are absolutely critical but having to utilize multiple devices isn’t anything new at all.

skycrafter01 day ago
Used to? We absolutely still do. LTO is a widely used format, and as far as I'm aware, it is "picking up more steam" each year.
tw04about 12 hours ago
I didn’t mean to imply that tape is dead, despite 40 years of [insert new technology] claiming it has finally killed tape.

I more meant we no longer have room-sized libraries unless the cloud providers have commissioned something custom and not available to the public. I believe the last installed Powderhorn I’m aware of was decommissioned almost a decade ago now.

https://www.iscgroupllc.com/products/storagetek/storagetek-p...

Dylan168071 day ago
In terms of capacity, LTO sales are increasing. In terms of tape count and drive count, there's been a steady decline.
bastawhiz1 day ago
It doubles design, development, and manufacturing cost, potentially doubling your supply chain. It's not a problem for the consumer.
s0rce1 day ago
Basically you just ignore the hyped-up press releases; these accompany most semi-cool/exciting papers. The scientists probably know this isn't going to be some new storage that becomes widespread, but it's just part of the game to sell the story like this, and the administration wants it.
lijokabout 22 hours ago
> who cares if it can store an exabyte if it takes all month to read it

> You probably don't want to have to need a separate device to read and a device to write

Are you only thinking about home consumer applications?

volemoabout 22 hours ago
I’m not sure what the GP is thinking, but I would love a cheap-ish exabyte storage even if it takes a month to read fully. Damn, I’d gladly take it even if the speed is comparable to an SSD! (Though the price would be a question of course.)
hn_throwaway_991 day ago
> Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.

Of course, wouldn't you expect that for a fairly mature technology you'd get tons of false starts from competing tech before eventually getting one breakthrough that completely changed everything? I mean, you could have written a comment perfectly analogous to your paragraph above about how AI and neural networks never really amounted to much for about 50-60 years until, all of a sudden, they did (and even if you think AI may currently be overhyped, it's undeniable that in the past 5 years AI has had an effect on society probably much greater than all the previous history of AI put together).

I prefer to read this academic paper as "Oh, this is a really interesting approach, I wonder what its limitations are" vs. interpreting it as a "this new storage tech will change the world!!!" announcement. I feel like the first approach leads to generally more curiosity, while the second just leads to cynicism and jadedness.

tngranadosabout 19 hours ago
I remember my father showing me one of those articles when I was a kid, about a postage-stamp-sized, thin and lightweight new memory system. I remember we were as doubtful then as you are now. A few years later I remembered that moment while switching the micro SD card of a camera… Sometimes these breakthroughs turn out to be exactly as they are told
bastawhizabout 12 hours ago
I remember reading the same stories about 5D optical storage. In 1996. It's still the same vaporware.

Flash, on the other hand, had made steady incremental progress from the time it was first described until it was fully commercialized.

bawolff1 day ago
In fairness, I assume any headline that emphasizes some excessively large storage density is probably at best something useful for archiving and not a replacement for an SSD. If they were targeting latency they would lead with those numbers, not the density.
moconnor1 day ago
Very large, fast, read-only memory now has an incredible use-case: NN weights.
bastawhizabout 12 hours ago
And what's described in this article is the opposite of fast! All of these technologies are
ijustlovemathabout 17 hours ago
1 exabyte/month is 380GB/s, which would be pretty epic in my opinion!
qingcharles1 day ago
The fact that most of the world's data is still stored on little spinny disks, considering how many times in the last 40 years we've seen this story, is criminal.
storus1 day ago
Aren't lasers driving the current 32TB+ HDD tech?
serf1 day ago
yeah but that wasn't a straight upgrade, either. HAMR has all sorts of tradeoffs.
SubiculumCode1 day ago
On every article like this, there is someone who points this out. Not hard to do, but it sure is reliable.
bastawhizabout 12 hours ago
And the failure of the technologies to deliver is equally reliable!
bell-cotabout 23 hours ago
The hard work would be maintaining a database of ideas which were similarly hyped over the past (say) couple centuries - including details on if/when each idea worked out, or fell out of hype-space, or was proven useless.

From that, you might be able to draw useful conclusions. Well...you'd also need correction factors for how profitable the hype itself was, over time, in the various scientific & technical fields.

The business model would be selling db access to VC's, R&D managers, and other folks making decisions about real money.

MrEldritch1 day ago
The concept is interesting, but I'm getting a lot of red flags from this - there's no experimental data or proof-of-concept work at all, which makes this feel more like a blue-sky "Look what we could do if we could arrange atoms however we wanted!" pipe dream in the Drexlerian mode. Something about the writing style's also pinging my LLM radar, which while not disqualifying in-and-of-itself is very discouraging in combination with the other funkiness. The chemistry and manufacturability strike me as questionable in particular, and I'm not convinced the physics of reading and writing are nearly as clean as the author seems to think.

(I'm also unclear how the bit is supposed to actually flip under the applied electric charge without the fluorine and carbon having to pass through each other.)

iliatoli1 day ago
The fluorine doesn't pass through carbon. It passes between two neighboring carbons through a C-C gap of 2.64 Å at the transition state. This is pyramidal inversion — the same mechanism as ammonia (NH₃), but with a 4.6 eV barrier instead of 0.25 eV. The transition state geometry is computed and verified with one imaginary frequency.
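For intuition on what those two barrier heights mean for bit stability, here is a rough classical Arrhenius estimate; the ~1e13 Hz attempt frequency is an assumed typical phonon-scale value, and quantum tunneling (which dominates ammonia's real inversion) is ignored:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU = 1e13        # attempt frequency in Hz, typical phonon scale (assumed)
T = 300          # room temperature, K

def flip_rate(barrier_ev: float) -> float:
    """Classical Arrhenius rate over a barrier; ignores tunneling."""
    return NU * math.exp(-barrier_ev / (K_B * T))

print(flip_rate(0.25))  # ammonia-like barrier: flips constantly
print(flip_rate(4.6))   # C-F inversion barrier: effectively never flips
```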
gus_massa1 day ago
> with one imaginary frequency

Technical note, because it's jargon:

"Real" means position = A * sin(w * t)

"Imaginary" means position = A * exp(w * t)

(because exp(i * w * t) = cos(w * t) + i * sin(w * t))

If you calculate in a computer an ammonia molecule with all the atoms in a plane z = 0 (instead of the usual pyramidal shape), then the N in the center is in an unstable equilibrium and does not make small vibrations like z = A * sin(w * t).

It makes a big "imaginary" vibration like z = A * exp(w * t) that is exponential for a short time while z is almost 0; then the approximations no longer apply and it reaches the z of the usual shape at equilibrium.
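A tiny numerical illustration of the same point: linearizing the motion around a stationary point, positive curvature of the potential gives oscillation, while negative curvature (a saddle, like planar NH3 or a transition state) gives exponential growth, the "imaginary frequency." All units and constants here are arbitrary.

```python
# z'' = -(k/m) * z near a stationary point; k is the potential's curvature.
# k > 0 (minimum): omega^2 > 0, small oscillation.
# k < 0 (saddle):  omega^2 < 0, exponential runaway from z = 0.
def evolve(k: float, m: float = 1.0, z0: float = 1e-3,
           dt: float = 1e-3, steps: int = 5000) -> float:
    z, v = z0, 0.0
    for _ in range(steps):       # simple explicit Euler integration
        v += -(k / m) * z * dt
        z += v * dt
    return z

z_stable = evolve(k=+5.0)   # stays a small oscillation around 0
z_saddle = evolve(k=-5.0)   # blows up away from the unstable equilibrium
print(abs(z_stable), abs(z_saddle))
```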

dgfl1 day ago
This is a pipe dream and I’m almost tempted to say a fever dream. The chemistry part seems somewhat sound, even though that’s outside of my field of expertise. But the entire readout process is questionable, and has clear signs of heavy AI writing.

The AFM mechanism described as “tier 1” (very strong LLMism, btw) is somewhat optimistic but realistic. The fields needed are large compared to usual values in solid state devices, but I’d guess achievable with an AFM. But “tier 2” is vague and completely speculative. Some random things I noted:

- Handwaving that (not an exact quote) “the read controller is cached. No need to read the same bit twice”. Cached with what?? If this miraculous technology can achieve 25 PB/s, what can possibly hope to cache it? More generally, it’s a strange thing to point out.
- Some magic and completely handwaved MEMS array that converts an 8 µm spot-size laser beam into atomic-resolution 2D addressing? In my opinion this is the biggest sin of the manuscript. What I understood to be depicted is just fundamentally physically impossible.
- A general misunderstanding of integrated electronics, and dishonest benchmarking: comparing real memory technologies being sold at scale right now against theoretical physical bounds on an untested idea. Also no mention of existing magnetic tape as far as I can tell.
- Constantly pulling out specific numbers or estimates with no citation and insufficient justification. Too many examples to even count.

I’m sorry for the harsh language, I wouldn’t use it for a usual review. But in my opinion this needs very heavy toning down and a complete rewrite, and is unfit for a proper review. Final remark: electronics is, and will always fundamentally be, intrinsically denser than optics. Some techniques “described” here, if they were possible, would have been applied to existing optical tech (e.g. phase-change materials in Blu-ray).

cynicalkane1 day ago
Yes, this paper is insane. The actual quote about caching is:

> Once a region of tape has been read, the controller stores the result. Subsequent operations reference the cache rather than re-interrogating the physical medium. Re-reading a known bit is unnecessary; the controller already holds its state

However, earlier, the paper claims:

> The transformer architectures underpinning modern large language models are bandwidth-limited, not compute-limited [1–3]. The energy consumed moving data between DRAM, NAND flash, and processor cache already exceeds the energy consumed by arithmetic in datacenter AI accelerators [2]. This is not an optimization problem. It is a materials problem [emphasis mine].

as part of a longer rant about the AI "memory wall" in the very first section. If we open with a long spiel about how memory is expensive in material cost and energy cost and this material is a solution for that then what are we caching the read in? On that note, what kind of computer engineer thinks about cache on the order of individual bits on a medium?

And, as you point out, 25 PB/s is a lot. Around 1000x that of a typical on-die SRAM cache, I think.

A while later, the author speaks of using atomic force microscopy to read the data back. The sizes of AFM scans are, in practice, as I understand, on the order of square micrometers. I think this whole paper is an AI-driven, as you put it, 'fever dream', enabling an author to put forth 60 pages of sciencey claims and sciencey math without -- as far as I can tell -- any concrete and novel scientific result of any kind. AI-driven reality warps are not new; the difference is nowadays AIs are good enough at sounding smart to get past the barriers of a typical smart person who might want to be fooled or make a show of being open-minded. Later on, the author proposes using "shaped femtosecond IR pulses" -- without further elaboration -- to address single atoms! IR wavelengths are on the order of a micrometer at minimum!

zozbot234about 22 hours ago
> Yes, this paper is insane.

Given the amount of AI writing involved, I'm pretty sure that you actually meant "this paper is inane". Or maybe both!

iliatoli1 day ago
Author here. Some fair points, some misreadings.

The caching comment refers to the Tier 1 controller holding a bitmap of bits it has already scanned — standard practice in any scanning probe system. It's not competing with the storage medium for capacity.

Tier 2 is explicitly labeled speculative. The paper's validation target is Tier 1: one C-AFM scan, one voltage pulse, existing equipment.

The core contribution is not the architecture — it's the physics: a verified transition state for C-F pyramidal inversion at 4.6 eV (B3LYP) and 4.8 eV (CCSD(T)), one imaginary frequency, barrier below bond dissociation. That's standard computational chemistry, not handwaving. The architecture sections are forward-looking by design.

The fluorine passes between two carbon neighbors through a C-C gap of 2.64 Å at the transition state — not through any atom. This is pyramidal inversion, the same mechanism as ammonia, but with a 4.6 eV barrier instead of 0.25 eV.

Magnetic tape comparison is in Table 2.

rogerrogerr1 day ago
Dude, you _have_ to write things in your own words if you want to be taken seriously. "The <x> is not <y> — it's <z>" will cause a bunch of people to disengage, and those people have high overlap with the people who may fund you.
topspin1 day ago
"Dude, you _have_ to write things in your own words if you want to be taken seriously."

How is this lost on people? Everything that contains the slightest hint of "AI slop" is instantly panned anywhere it appears, and yet people such as Ilia Toli appear to be entirely oblivious to this.

It's tragic. There is at least a non-zero chance that this work is a world changing breakthrough. It's clear, based on his engagement with comments here, that he at least believes this. And yet the first thing the guy does with it is debase it all using a clanker.

It boggles the mind.

We're seeing this throughout academe, in courts with both lawyers and judges, and among lawmakers and journalists. Several times a week one or another of these makes another headline for misapplying "AI". It seems that the work for which we are all expected to have the highest regard is coming from people that are completely witless; both unaware of how transparent this is and unaware of the consequences.

You have to be deeply ensconced inside an impenetrable bubble to do that to yourself.

Animats1 day ago
"A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude."

Does that mean a scanning tunneling microscope is the I/O mechanism? That's been demoed for atom-level storage in the past. But it's too slow for use.

iliatoli1 day ago
Yes, Tier 1 is scanning probe — C-AFM specifically. Slow but sufficient for proof of concept. The paper describes a Tier 2 architecture using near-field mid-IR arrays for parallel read/write, projecting 25 PB/s aggregate throughput. Tier 1 proves the physics. Tier 2 is the engineering path to speed.
ilaksh1 day ago
What do you need to build a demo of Tier 2? I am guessing if you can do that then you can get an investor.
iliatoli1 day ago
Tier 2 requires near-field infrared optics at sub-10 nm resolution — that's active research in several groups but not commercially available yet. The immediate next step is Tier 1: one C-AFM image proving the read, one voltage pulse proving the write. That's $300 in materials and access to an AFM. Already in progress with a collaborator.
rowanG0771 day ago
Using a mid-IR array with sub 10nm resolution is anything but an engineering path. Tech like that has never left the lab afaik.
iliatoli1 day ago
Fair point. That's why the paper labels it Tier 2 (near-term research) rather than Tier 1 (existing instrumentation). Tier 1 — scanning probe read/write on a single sample — is the immediate validation target and requires no new technology.
mkprc1 day ago
Sniff test: a paper with a single author and 53 revisions, listing a gmail address as contact information despite the author, after a brief internet search, appearing to have affiliations with CSU Global, (maybe) the University of Central Florida, and the San Jose State University Department of Aerospace.
iliatoli1 day ago
Author here. Three PhDs (Mathematics, Pisa; Quantum Chemistry, UCF; Materials Science, UTD — in progress), plus MS degrees from SJSU and CSU. The gmail is because this is independent work, not affiliated with any institution. v53 reflects thirteen years of development since the original 2013 publication (Graphene 1, 107–109). The barrier is verified at two independent levels of theory with a confirmed transition state. Happy to discuss the physics.
ricardobeat1 day ago
That’s amazing. Do you have a home lab with an atomic microscope where you do your research?

And what’s the reason for going solo vs a research university, where I assume this type of research could be significantly sped up?

iliatoli1 day ago
No lab — the work is computational. All calculations run on a Dell Precision workstation with ORCA (quantum chemistry) software. An experimental collaborator is now preparing the C-AFM validation. The solo approach is a consequence of the work spanning multiple fields that don't share a single department.
gus_massa1 day ago
What were the topics and titles of your dissertation in the first two PhD? Were they related to this topic or totally different?

Edit: https://www.mathgenealogy.org/id.php?id=61429 It looks quite unrelated

iliatoli1 day ago
First PhD: algebraic cryptanalysis (Pisa). Second PhD: exact solutions to the Schrödinger equation for few-body systems (UCF). Both unrelated to fluorographane — the connection emerged later.
edfletcher_t1371 day ago
This is their referenced 2013 paper on the subject:

https://www.researchgate.net/publication/258423577_Data_Stor...

Clearly they have been working on this for over a decade.

hgoel1 day ago
Is there a reason you went for 3 PhDs? Especially since they're all in STEM? To me it's a red flag because the point of a PhD is to learn to do research, you don't need to get another one to move between fields (especially within STEM), just need to do research with people in those fields and gain experience.
iliatoli1 day ago
Each PhD was in a different country and decade. Mathematics (Pisa, 2000s), Quantum Chemistry (UCF, 2010s), Materials Science (UTD, now). The fluorographane work exists because all three converge — the barrier calculation is quantum chemistry, the proof structure is mathematics, and the material is materials science. I didn't plan it this way.
juleiie1 day ago
Some people actually enjoy studying and learning in these spaces. Does everything have to be optimized?
foota1 day ago
Hey -- I have 0 PhDs so take this with a grain of salt :)

I had thought for a while about a way to store data that makes use of an idea that I had for sub-diffraction limited imaging inspired by STED microscopy.

First an overview of STED. You have a "donut" shaped laser (or toroidal laser) that is fired on a sample. This laser has an inner hole that is below the diffraction limit. This laser is used to deplete the ability of the sample to fluoresce, and then immediately after a second laser is shone on the same spot. The parts of the sample depleted by the donut laser don't fluoresce and so you only see the donut hole fluoresce. This allows you to image below the diffraction limit.

My idea was to apply this along with a layer in the material that exhibits sum frequency generation (SFG). The idea is that you can shine the donut laser with frequency A and a gaussian laser with frequency B at the same spot. When they interact in the SFG material you get some third frequency C as a result of SFG. Then, below that material would be a material that doesn't transmit frequencies C and A.

Then what you'd be left with after the light shines through those two layers is some amount of light at frequency B. The brightness inside the hole and outside of the hole would depend on how much of the light from frequency B converts into frequency C. Sum frequency generation is a very inefficient process, with only some tiny portion of the light participating. But my thinking is that if laser B is significantly less bright than laser A, then most of the light from laser B will participate in sum frequency generation where it mixes with laser A, and you'll be left with only a tiny bit of laser B outside of the hole. That gives you a nice contrast ratio for the light at frequency B between the hole and the surroundings, which then allows you to image whatever is below these layers below the diffraction limit.

In my idea the final layer is some kind of optical storage medium that can be read/written by the laser below the diffraction limit. Obviously aiming this would be hard :) My idea was that it would be some kind of spinning disk, but I never really got to that point.
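The resolution gain the STED idea above relies on can be sketched with the standard STED scaling formula d = λ / (2·NA·√(1 + I/Is)); the wavelength and numerical aperture below are illustrative assumptions:

```python
import math

# Standard STED resolution scaling: the donut (depletion) beam shrinks the
# effective spot below the diffraction limit as its intensity I rises
# relative to the saturation intensity Is.
def sted_resolution_nm(wavelength_nm: float = 600, na: float = 1.4,
                       saturation_factor: float = 0.0) -> float:
    # saturation_factor = I / Is; 0 recovers the plain diffraction limit
    return wavelength_nm / (2 * na * math.sqrt(1 + saturation_factor))

print(sted_resolution_nm())                        # ~214 nm, no depletion
print(sted_resolution_nm(saturation_factor=100))   # ~21 nm with depletion
```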

YZF1 day ago
Curious if you've patented this? Very cool. The physics is way beyond me but I understand that each atom in the crystal can be in two states? And those are stable? There is no cross talk or decay at all?

You're comparing to current memory technologies but there are also some optical technologies like AIE-DDPR which presumably is (a lot?) less dense but has layers (I noticed you're also discussing a volumetric implementation), would devices based on your technology be simpler/faster? (I guess optical disks don't intend to replace high speed memory). What about access times?

iliatoli1 day ago
Patent strategy is under consideration. Happy to discuss offline — ilia.toli@gmail.com.
_alternator_1 day ago
Have you considered subjecting this to expert scrutiny by submitting to a journal? That's probably better than getting hot takes on HN by random technology enthusiasts, skeptics, anon experts, and trolls.
tux31 day ago
Realistically I don't see how this could be submitted to a journal as-is.

I'm sure you could take this material and write a couple papers out of it, but right now this is a 60 page word document with commentary on a variety of topics from memory market economics to quantum computing.

It's full of self-congratulatory language like "The transition is not an incremental improvement within the existing paradigm; it obsoletes the paradigm and the infrastructure built around it". Alright, I'm happy to believe that this work is important. But this is not the neutral tone of a scientific article, it reads like ad copy for a new technology.

I'm sure there's interesting physics in there, but it needs a serious editing effort before it could be taken seriously by a journal.

iliatoli1 day ago
It's under peer review at Physica Scripta (IOP) since March 25. HN is for visibility, not validation.
ilaksh1 day ago
Sniff test as in you turned your nose up without even looking at it on a purely surface level based on affiliation.

Smells like laziness to me.

smallerize1 day ago
There's no point spending time wading into every crackpot paper. The volume is too high. I'm not saying this specific paper is junk, but I don't blame people for having a quick filter.
doctorpangloss1 day ago
I suppose anyone can run the same computer simulations.
iliatoli1 day ago
Yes — the input files, level of theory, and software (ORCA 6.1.1, free for academics) are all specified in the paper. The calculations are fully reproducible.
aperrien1 day ago
Remarkable. If this material works and is flexible enough, we could someday see tape drives with hundreds of exabytes of capacity.
iliatoli1 day ago
Author here. The paper describes exactly this — a nanotape spool architecture with volumetric density of 0.4–9 ZB/cm³. Section 4.4 in the preprint.
ethmarksabout 22 hours ago
Pardon my ignorance, but why does the "447 TB/cm^2" density value use square centimeters instead of a volume unit? Does the information capacity of this material really scale in proportion to area? How? Or is it just a typo?
iliatoliabout 22 hours ago
fluorographane is a single atomic layer — one carbon thick — so storage density is naturally per unit area. The paper also gives volumetric density for the nanotape spool architecture (0.4–9 ZB/cm³, Section 4.4).
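The areal and volumetric figures quoted above can be sanity-checked against each other. A minimal back-of-envelope sketch (assuming decimal TB/ZB units and simple sheet stacking; the actual spool geometry described in Section 4.4 may differ) asks what layer-to-layer spacing the volumetric range implies:

```python
# Back-of-envelope check: what layer spacing is implied if sheets with
# the quoted areal density (447 TB/cm^2) are stacked to reach the quoted
# volumetric density (0.4-9 ZB/cm^3)?

AREAL_TB_PER_CM2 = 447   # areal density quoted in the thread
TB = 1e12                # bytes per terabyte (decimal)
ZB = 1e21                # bytes per zettabyte (decimal)

areal_bytes_per_cm2 = AREAL_TB_PER_CM2 * TB

def implied_layer_spacing_nm(volumetric_zb_per_cm3):
    """Layer spacing (nm) needed to reach the given volumetric density
    by stacking sheets of the stated areal density."""
    layers_per_cm = volumetric_zb_per_cm3 * ZB / areal_bytes_per_cm2
    return 1e7 / layers_per_cm   # 1 cm = 1e7 nm

for v in (0.4, 9):
    print(f"{v} ZB/cm^3 -> layer spacing ~{implied_layer_spacing_nm(v):.2f} nm")
```

Under these assumptions, the low end of the range corresponds to roughly 11 nm between layers and the high end to roughly 0.5 nm, i.e. the high end requires layers packed at close to atomic spacing, which is consistent with the single-atom-thick sheet described above.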
est1 day ago
Perhaps title had a typo?

fluorographane -> Fluorographene

Can't find a single page about fluorographane

https://en.wikipedia.org/w/index.php?search=fluorographane&t...

But this

https://en.wikipedia.org/wiki/Fluorographene

iliatoli1 day ago
Not a typo. Fluorographene is the sp² form (Nair et al. 2010). Fluorographane uses the -ane suffix to denote full sp³ saturation — same convention as graphene → graphane. The sp³ hybridization is what creates the bistable C-F orientation that stores the bit.
est1 day ago
TIL thanks!
gnabgib1 day ago
Fluorographane: Synthesis and Properties (pdf) https://pubs.rsc.org/en/content/getauthorversionpdf/C4CC0884...
dazhbog1 day ago
Looks interesting but can't take it seriously when there are so many red flags of LLM-style writing. The author continues to use AI even to reply to comments in this thread. (it's not X, it's Y; em dashes; etc.)
dlev_pika1 day ago
The Now I Get non-technical version, because I need someone to explain this to me x)

https://nowigetit.us/pages/d7f94fd0-e608-47f9-8805-429898105...

adam_patarinoabout 23 hours ago
Fluorographane is that stuff in factorio space age, no?
Rahulghoti1 day ago
This would make the big companies lose their profits, since they still have their old products in their warehouses. Why would they want this tech to arrive? That's why it never materializes.
cluckindan1 day ago
Any sufficiently advanced technology is indistinguishable from magic, as proven by the number of comments treating the paper as an AI slop pipe dream.
d--b1 day ago
I don’t understand the comments here. They say in the last paragraph:

> A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude.

Are we supposed to read all these stories as lies?

Now it doesn’t say that this is easy to produce, but if those claims are true, it doesn’t really matter if it is very expensive.

It doesn’t say either if the stuff can withstand live conditions.

It’s annoying not to be able to trust whether solutions like these are viable or not.

iliatoli1 day ago
The scanning-probe claim is real — C-AFM on fluorographane is achievable with existing commercial instruments. The paper is a computational prediction with a detailed experimental protocol. An experimental collaborator is preparing the validation now. The 'live conditions' question is addressed in Section 5 (radiation hardness, mechanical damage, defect physics).
rcxdudeabout 23 hours ago
By itself the density of such a system would just be an interesting superlative: the paper itself references people who have achieved similar densities in the lab, but it's not necessarily useful if the read and write are slow and the total addressable area is minuscule: both things I would expect from the described proof-of-concept (the main point of which would be to demonstrate that the storage works at all, and maybe to evaluate its robustness to some degree).

You should not expect that even the best of ideas at this stage are going to turn into products on any reasonable timescale: this is at a super early level of development, and there are probably more things that can go wrong than you are imagining. But the paper shows there has been a good amount of effort at this stage to evaluate the robustness of the storage: the whole reason for this particular arrangement seems to be that it's pretty robust while still being writable. (though anything nanoscale is not something you're going to be able to handle directly)

next_xibalba1 day ago
Too long, not gonna read. When do I get my 447TB iPhone?
jmyeet1 day ago
Yeah, I've been baited by "breakthroughs" in storage technology for almost 40 years at this point [1]. I'll believe it when it's in Best Buy. Battery "breakthroughs" have really taken up the mantle of headline-grabbing research fund-raising articles so it's nice to see a throwback to the OG: storage.

[1]: https://www.tampabay.com/archive/1991/06/23/holograms-the-ne...

anigbrowl1 day ago
I am about the same age and started loading programs off cassette tapes. The fact that I can get a terabyte of storage in a micro SD card the size of my pinkie nail for under $200 still impresses me.
timcobb1 day ago
This is research...
jmyeet1 day ago
It's always "research". I put that in quotes because any press like this isn't really "research", it's "fund-raising". It's the academic game of getting papers into the right publications, getting "street cred" by getting the right heavyweights as co-authors and to cite you, to become a "heavyweight" by doing the same thing and ultimately getting more grants to perpetuate the cycle.

Research can be interesting but so often none of it goes anywhere, it's just hype and there's a reproducibility crisis in academia. Look at the decades wasted on academic fraud and appeals to authority with Alzheimer's research [1].

Most of this media is the academic equivalent of "doctors HATE this guy".

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC12397490/

cluckindan1 day ago
Do you think it's logically sound to marry the "no true Scotsman" to a strawman argument?

Or, to imply guilt by association by first constructing a false stereotype of research in one field, and then applying it onto an instance of research in another field?

XorNot1 day ago
I mean battery breakthroughs are real though? BYD is now demoing 0-80% in 5 mins on production vehicles in China.

The price of the 50kwh unit I had put into my house was very low.

Sodium ion is ramping up too and is now commercially available. That simply wasn't possible a few years ago until the electrode breakthroughs.

golem141 day ago
Do you have any pointers on said 50kWh battery? Asking for a friend.
XorNot1 day ago
This was the group who did it for me in Australia: https://voltxenergy.com.au/

It was under subsidy, but I got about double what I was going to get about 6 months prior. There are 50kWh units going on AliExpress for about $12k AUD outright, so I think there's been another step down in per-cell costs which is trickling through.

I'm waiting for a price cut to make outright purchases a bit more affordable, but with a wholesale electricity service plan, adding another, say, 100kWh probably works out.