import { object, string, number, validate } from "decode-kit";
// Example of untrusted data (e.g., from an API)
const input: unknown = { id: 123, name: "Alice" };

// Validate the data (throws if validation fails)
validate(input, object({ id: number(), name: string() }));

// `input` is now typed as { id: number; name: string }
console.log(input.id, input.name);
When validation fails, decode-kit takes an equally thoughtful approach. Rather than being prescriptive about error formatting, it exposes a structured error system with an AST-like path that precisely indicates where validation failed. It does include a sensible default error message for debugging, but you can also traverse the error path to build whatever error handling approach fits your application - from simple logging to sophisticated user-facing messages.
The library also follows a fail-fast approach, immediately throwing when validation fails, which provides both better performance and clearer error messages by focusing on the first issue encountered.
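To make that concrete, here is a rough sketch of what consuming such an error could look like. The `path` and `message` fields on the thrown error are assumptions for illustration, not decode-kit's documented API:

import { object, string, number, validate } from "decode-kit";

const input: unknown = { id: "oops", name: "Alice" };

try {
  // Fail-fast: throws on the first issue encountered
  validate(input, object({ id: number(), name: string() }));
} catch (err) {
  // Hypothetical error shape: a path into the value plus a debug message
  const { path, message } = err as { path: (string | number)[]; message: string };
  console.error(`Validation failed at "${path.join(".")}": ${message}`);
}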
I'd love to hear your thoughts and feedback on this approach.
Would this mask any errors that would occur later in the validation?
My overall takeaway has mostly been to not optimize for the worst case by default. Keep fail-fast as baseline for boundaries and hot paths, and selectively enable “collect all” where it demonstrably saves human time.
We currently expose errors as a tree structure so that it's easy to map an error to a value or build custom error messages (we only provide a debug error message). I haven't been able to come up with a satisfactory error API that accommodates multiple error paths, but you raise an excellent point. Thanks for pointing it out.
TypeScript language features like branded types and private constructors can make it so those values can only be constructed through the parse method (a rough sketch follows below).
They're really not much different, in terms of type safety*, from something like Serde.
*: they are of course different in other important ways -- like that Serde can flexibly work with all kinds of serialized formats.
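A minimal sketch of the private-constructor half of that idea, in plain TypeScript with no library involved (the Email class here is purely illustrative):

class Email {
  // Private constructor: `new Email(...)` is a compile error outside this class
  private constructor(readonly value: string) {}

  // The only way to obtain an Email is through parse, which validates first
  static parse(input: string): Email {
    if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input)) {
      throw new Error(`Not a valid email address: ${input}`);
    }
    return new Email(input);
  }
}

const email = Email.parse("alice@example.com"); // ok
// const bad = new Email("nope");               // rejected by the compiler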
In certain cases (like validating that an input is in ISO 8601 format), we refine the input type to a branded type (we have an Iso8601 branded type). At runtime it's just a string, but at compile time TypeScript treats it as a distinct type that can only be obtained through validation. But it is still not transforming or parsing the data in the way that the blog post intends, which is by design.
https://github.com/nimeshnayaju/valleys?tab=readme-ov-file#i...
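For readers unfamiliar with the pattern, a branded type boils down to a compile-time-only tag on an ordinary string. A rough sketch follows; the Iso8601 name matches the comment above, but the code is illustrative rather than valleys' actual implementation:

// At runtime an Iso8601 is just a string; the brand exists only at compile time
type Iso8601 = string & { readonly __brand: "Iso8601" };

// Illustrative refinement: the only way to obtain an Iso8601-typed value
function toIso8601(input: string): Iso8601 {
  // Simplified check; a real validator would be stricter about the exact format
  if (Number.isNaN(Date.parse(input))) {
    throw new Error(`Not an ISO 8601 timestamp: ${input}`);
  }
  return input as Iso8601;
}

const ts: Iso8601 = toIso8601("2024-05-01T12:00:00Z"); // ok
// const bad: Iso8601 = "2024-05-01T12:00:00Z";        // compile error without refinement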
https://github.com/nimeshnayaju/zod (Fork of Zod's repo which already included benchmarks comparing Zod 4 against Zod 3, so I simply integrated my validation library)
https://github.com/nimeshnayaju/valibot-benchmarks (An unofficial benchmark suggested in another comment comparing Valibot against Zod)
I think anything that declares itself as a performance improvement over the competition ought to prove it!
In the next few days, I'll prepare benchmarks to compare with Zod and Valibot!
For benchmarks, I forked Zod, which already included benchmarks comparing Zod 4 against Zod 3. Here are the results (in README.md) if you're interested:
https://github.com/nimeshnayaju/zod
I noticed significantly better performance than both Zod 3 and Zod 4 across most validation scenarios (especially validations involving rules like min/max length), with the exception of simple object parsing.
I also forked another benchmark suggested in a different comment if you're interested (I noticed similar results).
https://github.com/nimeshnayaju/valibot-benchmarks
I have included the results I obtained from running the benchmark in the README.md. I'd love for you to also take a look at the changed code to see if I may have missed something with the integration. Curious to hear your feedback too!
They show a lot of potential and promising results. Any guess why Valley is slower for parsing objects with primitive values?