

Discussion (4 Comments)
Read Original on HackerNews

pona-a • 1 day ago
I feel like normalization would be a nightmare. Consider all the mistranscriptions, OCR errors, and different names in the libraries (case, parentheticals, etc).

If we assume there's no reliable way to define a book, maybe locality-sensitive hashing could help find probably-same books.

The idea is pretty cool though.
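For context on the LSH suggestion above: one common form is MinHash over character shingles, where two titles that share most of their k-grams get signatures that agree in most positions. A minimal sketch (a toy illustration only; the function names and parameters here are made up for this example, not part of any project in the thread):

```python
import hashlib

def shingles(title, k=3):
    """Character k-grams of a lowercased, whitespace-collapsed title."""
    t = " ".join(title.lower().split())
    return {t[i:i + k] for i in range(max(1, len(t) - k + 1))}

def minhash(shingle_set, num_hashes=64):
    """MinHash signature: one minimum hash per seeded hash function."""
    return [
        min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingle_set
        )
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Two transcriptions of the same title (an extra comma, minor OCR noise) land close in signature space while unrelated titles do not, so candidate "probably-same" pairs can be surfaced without first agreeing on a canonical form.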

novalis78 • about 21 hours ago
Good point. Normalization is deliberately scoped to 'what a human reads off the title page' rather than reconciling all possible metadata sources. LSH as a complementary fuzzy-matching layer for catalog reconciliation is exactly what the planned resolver at openusbn.org is designed to support: deterministic identifier as the anchor, probabilistic matching as the discovery tool.
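To make the "what a human reads off the title page" scoping concrete, a deterministic normalization pass might look like the following. This is a hypothetical sketch, not the actual openusbn.org rules; `normalize_title` and every step in it are assumptions for illustration:

```python
import re
import unicodedata

def normalize_title(raw):
    """Hypothetical title-page normalization (NOT the openusbn.org spec):
    fold Unicode, strip accents, lowercase, drop punctuation, collapse spaces."""
    t = unicodedata.normalize("NFKD", raw)
    t = "".join(c for c in t if not unicodedata.combining(c))  # remove accents
    t = t.lower()
    t = re.sub(r"[^\w\s]", " ", t)  # punctuation becomes whitespace
    return re.sub(r"\s+", " ", t).strip()
```

Deterministic rules like these would provide the stable anchor; a fuzzy LSH layer could then handle whatever variation the rules cannot reconcile.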

47282847 • about 14 hours ago

I would believe there is a nontrivial number of books by the same author published in the same year with the same title, whether in different formats, in different languages, and/or by different publishers.

wduquette • about 10 hours ago

Right. This would conflate, for example, the British and American editions of a book published in both countries in the same year, and those are frequently different; so might different editions of a book published within the same year.