Blog

Revisiting Sprecher's Proof of Kolmogorov's Superposition Theorem

July, 2025 | 7 min read
An exploration of one of mathematics' most influential failures: David Sprecher's 1996 attempt to make Kolmogorov's superposition theorem computationally practical. The attempt sparked a decade of rigorous investigation that revealed fatal issues with monotonicity and continuity, before ultimately leading to Köppen's recursive solution and the modern breakthroughs in Kolmogorov-Arnold Networks that power, or at least show strong potential to power, today's machine learning applications.
Mathematical History Function Approximation Neural Networks Kolmogorov Theory

Mind The Trap: Verification Principles in LLMs for Automated Scientific Discovery

June, 2025 | 9 min read
This post examines parallels between the verification principles central to Logical Positivism and contemporary approaches to ensuring reliability in LLM outputs for scientific discovery. We identify four key verification traps that may constrain LLMs' scientific potential and propose an alternative approach inspired by Epicurus' principle of multiple explanations. The discussion culminates in a research proposal for a balanced framework that ensures reliability while preserving LLMs' capacity for creative scientific thinking.
Verification Principles Scientific Discovery Logical Positivism Research Framework

Towards a Taxonomy of Logic for a Better Understanding of the Ostensible Reasoning of LLMs

June, 2025 | 13 min read
In this post, we present a comprehensive taxonomy of logical reasoning, systematically charting the landscape from fundamental deductive and non-deductive frameworks to specialized logical systems and meta-logical properties. Building on this structured taxonomy, we then explore the implications of this mapping for understanding and evaluating reasoning processes in LLMs. The discussion is anchored in the goal of establishing clearer conceptual boundaries for assessing LLM reasoning performance.
Logical Taxonomy Machine Reasoning Meta-logical Properties

Exhaustive-Meta-Metrics for LLM Hallucination Assessment: A Comprehensive Taxonomy

April, 2025 | 21 min read

Evaluating hallucinations in LLM outputs is anything but straightforward. Over the past few years, researchers have developed a wide array of metrics—from ROUGE and BLEU to embedding-based and graph-based techniques—each with its own strengths and blind spots. This post walks through a structured taxonomy of these metrics, classifying them by lexical, semantic, factual, logical, and pragmatic dimensions.

But we do not stop there. The latest addition to this taxonomy is a unified Meta-Metric Framework that integrates diverse metrics into a single, adaptive evaluation pipeline. Inspired by ensemble learning, this framework uses dynamic weighting, dimensional aggregation, and task-aware scoring to deliver a more robust, interpretable, and generalizable assessment of hallucination across domains. Whether you're working on summarization, open-domain QA, or factual dialogue systems, this meta-metric approach offers a practical path toward better model accountability and fidelity.
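The weighted-ensemble idea at the core of the framework can be illustrated in a few lines. This is a minimal sketch, not the framework itself: the metric dimensions, scores, and task weights below are hypothetical placeholders chosen for illustration.

```python
def meta_metric(scores, weights):
    """Aggregate per-dimension hallucination scores into one value.

    scores:  {dimension: score in [0, 1]}  (higher = more faithful)
    weights: {dimension: nonnegative task-aware weight}
    Returns the weighted mean over the scored dimensions.
    """
    total = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total

# Hypothetical per-dimension scores for one generated summary.
scores = {"lexical": 0.62, "semantic": 0.81, "factual": 0.74}

# Task-aware weighting: a summarization task might emphasize factuality.
weights = {"lexical": 1.0, "semantic": 2.0, "factual": 3.0}

print(round(meta_metric(scores, weights), 3))  # → 0.743
```

Swapping in a different weight profile (say, emphasizing semantic similarity for paraphrase evaluation) changes the aggregate without touching the underlying metrics, which is the adaptivity the framework aims for.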

Hallucination Detection Meta-Metric Framework Multi-Dimensional Metrics