Pyrefly vs. Ty: Comparing Python's two new Rust-based type checkers

2025-05-27 · blog.edward-li.com

A deep dive into Meta's pyrefly and Astral's ty - two new Rust-based Python type checkers that both promise faster performance and better type inference.

Note: I like using em-dashes while writing! Don’t worry, this is not written by AI. (context)

Earlier this month, two new Rust-based Python type checkers hit the spotlight: pyrefly and ty. Although neither is officially released, they are a welcome change to the Python type checking world, historically dominated by mypy and pylance.

While both have been open source and publicly downloadable for quite some time, there had been no official announcements from Meta or Astral about their brand-new next-generation Python type checkers — until last week.

At PyCon 2025, nestled away in a quiet Room 319 at the Typing Summit, we had our first official sneak peek into both of these tools — the team behind them, their goals, visions, and ambitions — and their unique approaches to tackling Python’s typing problems.

ty introduction presentation at PyCon 2025
ty team presenting at the typing summit

This blog is a collection of rough notes scribbled during the event, personal conversations with the team, and not-too-thorough experiments that I’ve run myself. As such, some details might be a little blurry.

Also, both of these tools are still in early alpha!

Please do not treat this as a definitive judgment of which tool is better or worse. This blog is just for fun, to see what state the two tools are in right now!

The following tests and experiments were performed on the latest versions of pyrefly, ty, mypy, and pyright as of writing this blog:

  • pyrefly 0.17.0
  • ty 0.0.1-alpha.7 (afb20f6fe 2025-05-26)
  • mypy 1.15.0 (compiled: yes)
  • pyright 1.1.401

Pyrefly

Pyrefly is Meta’s new Rust-based Python type checker, replacing Pyre — Meta’s previous Python type checker written in OCaml. The hope is that Pyrefly will be faster, more portable, and more capable than Pyre.

One key thing the Pyrefly team made very clear this year is that they want to be truly open source. Pyre was also technically open source, but it was more of a “we built this for our needs, but here’s the source code if you want it”. In contrast, one of the foundational goals of Pyrefly is to be more engaged with the needs of the open-source community.

pyrefly introduction presentation

ty

ty is also a Rust-based Python type checker, currently under development by Astral, the team behind uv and ruff. The project was formerly known as Red Knot, but now has its official name: ty. Compared to Meta, Astral has been a lot quieter about its announcement: just a soft launch on GitHub, a quick 30-minute presentation, and a couple of blog articles and podcasts here and there.

ty introduction presentation

Similarities

Both pyrefly and ty are written in Rust, both are incremental (albeit implemented slightly differently: see details below), and both are powered under the hood by Ruff for AST parsing. Also, both have first-class support for command-line type checking and LSP/IDE integration.

However, other than the fact that they are both fast Python type checkers, that’s where the similarities end. In my opinion, there are four categories in which these two tools differ: in Speed, Goals, Incrementalization, and Capabilities. That’s what we’ll explore today.

Speed

Speed seemed like one of the main focuses of Pyrefly, being mentioned multiple times during the intro presentation. According to the team, it’s 35x faster than Pyre and 14x faster than Mypy/Pyright, checking up to 1.8 million lines of code per second. Fast enough to “type check on every keystroke”.

In comparison, speed was also one of the main design goals for ty, but it felt like less of a focus during the introduction. The only claim was “1-2x faster than current generation type checkers”. Naturally, I wanted to test performance out for myself.

Benchmarking - PyTorch

For the first test, I cloned and checked out the latest release of PyTorch (v2.7.0) and compared type check times between pyrefly, ty, mypy, and pyright on a MacBook M4. Two tests were run, one on the entire pytorch repository and another on just the torch subdirectory:

Note: PyTorch does not support the latest mypy, so mypy 1.14.0 was used instead.

pytorch benchmarks
Commands Run
  • pyrefly: hyperfine --warmup 3 --runs 5 --ignore-failure 'pyrefly check'
  • ty: hyperfine --warmup 3 --runs 5 --ignore-failure 'ty check'
  • mypy: hyperfine --warmup 3 --runs 5 --ignore-failure 'mypy --cache-dir=/dev/null .'
  • pyright: hyperfine --warmup 3 --runs 5 --ignore-failure 'pyright'
Raw Data
ty
  Time (mean ± σ):      4.039 s ±  0.234 s    [User: 19.135 s, System: 3.850 s]
  Range (min … max):    3.888 s …  4.455 s    5 runs

pyrefly
  Time (mean ± σ):     13.029 s ±  0.136 s    [User: 60.489 s, System: 6.297 s]
  Range (min … max):   12.916 s … 13.184 s    5 runs

mypy
  DNF (did not finish)

pyright
  Time (mean ± σ):     262.742 s ±  4.948 s    [User: 472.717 s, System: 18.898 s]
  Range (min … max):   259.173 s … 270.617 s    5 runs
pytorch torch benchmarks
Commands Run
  • pyrefly: hyperfine --warmup 3 --runs 10 --ignore-failure 'pyrefly check torch'
  • ty: hyperfine --warmup 3 --runs 10 --ignore-failure 'ty check torch'
  • mypy: hyperfine --warmup 3 --runs 10 --ignore-failure 'mypy --cache-dir=/dev/null torch'
  • pyright: hyperfine --warmup 3 --runs 10 --ignore-failure 'pyright torch'
Raw Data
ty
  Time (mean ± σ):      1.123 s ±  0.022 s    [User: 6.460 s, System: 0.604 s]
  Range (min … max):    1.082 s …  1.167 s    10 runs

pyrefly
  Time (mean ± σ):      2.347 s ±  0.261 s    [User: 15.876 s, System: 0.919 s]
  Range (min … max):    2.089 s …  2.988 s    10 runs
  
mypy
  Time (mean ± σ):     24.731 s ±  0.238 s    [User: 24.144 s, System: 0.519 s]
  Range (min … max):   24.299 s … 25.016 s    10 runs
  
pyright
  Time (mean ± σ):     48.096 s ±  1.705 s    [User: 68.526 s, System: 4.072 s]
  Range (min … max):   46.037 s … 50.488 s    10 runs

Out of the gate, we see that for both the full pytorch repository and just torch, ty is about 2-3x faster than pyrefly, and both are well over 10x faster than mypy and pyright.

One interesting note is that pyrefly detected more source files than ty: about 8600 for pyrefly and 6500 for ty on pytorch (I’m not sure where the discrepancy comes from).

It’s also important to remember that both pyrefly and ty are still in early alpha, and are not feature complete. This may skew the results!

Benchmarking - Django

Next, I ran the same benchmark on Django version 5.2.1.

Note: mypy errored out during this test.

django benchmarks
Commands Run
  • pyrefly: hyperfine --warmup 3 --runs 10 --ignore-failure 'pyrefly check'
  • ty: hyperfine --warmup 3 --runs 10 --ignore-failure 'ty check'
  • mypy: hyperfine --warmup 3 --runs 10 --ignore-failure 'mypy --cache-dir=/dev/null .'
  • pyright: hyperfine --warmup 3 --runs 10 --ignore-failure 'pyright'
Raw Data
ty
  Time (mean ± σ):     578.2 ms ±  27.8 ms    [User: 2980.4 ms, System: 546.9 ms]
  Range (min … max):   557.1 ms … 634.0 ms    10 runs

pyrefly
  Time (mean ± σ):     910.7 ms ±  26.2 ms    [User: 3033.0 ms, System: 565.0 ms]
  Range (min … max):   879.6 ms … 963.1 ms    10 runs
  
mypy
  DNF (did not finish)
  
pyright
  Time (mean ± σ):     16.324 s ±  0.476 s    [User: 24.477 s, System: 1.682 s]
  Range (min … max):   15.845 s … 17.182 s    10 runs

We see a similar pattern across the board, with ty being the fastest (2,900 files in 0.6 s), pyrefly a close second (3,200 files in 0.9 s), and pyright the slowest (16 s).

Benchmarking - Mypy

Finally, I ran the benchmark on the mypy repo itself (more specifically the mypyc subdirectory). Similar results here.

mypy mypyc benchmarks
Commands Run
  • pyrefly: hyperfine --warmup 3 --runs 20 --ignore-failure 'pyrefly check mypyc'
  • ty: hyperfine --warmup 3 --runs 20 --ignore-failure 'ty check mypyc'
  • mypy: hyperfine --warmup 3 --runs 20 --ignore-failure 'mypy --cache-dir=/dev/null mypyc'
  • pyright: hyperfine --warmup 3 --runs 20 --ignore-failure 'pyright mypyc'
Raw Data
ty
  Time (mean ± σ):      74.2 ms ±   1.5 ms    [User: 403.4 ms, System: 41.6 ms]
  Range (min … max):    71.9 ms …  78.1 ms    20 runs

pyrefly
  Time (mean ± σ):     136.0 ms ±   1.5 ms    [User: 728.3 ms, System: 54.5 ms]
  Range (min … max):   133.4 ms … 139.6 ms    20 runs
  
mypy
  Time (mean ± σ):      3.544 s ±  0.099 s    [User: 3.442 s, System: 0.093 s]
  Range (min … max):    3.420 s …  3.774 s    20 runs
  
pyright
  Time (mean ± σ):      2.852 s ±  0.103 s    [User: 4.315 s, System: 0.227 s]
  Range (min … max):    2.704 s …  3.105 s    20 runs

Goals

The primary goals of pyrefly and ty are where I feel the main difference lies. Pyrefly tries to be as aggressive as possible when typing, inferring as much as possible so that even code with absolutely no explicit types still gets some amount of typing guarantees.

ty, on the other hand, follows a different mantra: the gradual guarantee. The principal idea is that in a well-typed program, removing a type annotation should not cause a type error. In other words: you shouldn’t need to add new types to working code to resolve type errors.

the gradual guarantee slide from ty presentation

This is shown in this example here:

from typing import reveal_type

class MyClass:
    attr = None

foo = MyClass()

# ➖ pyrefly | revealed type: None
# ✅ ty. | Revealed type: `Unknown | None`
# ➖ mypy. | Revealed type is "None"
# ➖ pyright | Type of "foo.attr" is "None"
reveal_type(foo.attr)

# ➖ pyrefly | ERROR: Literal[1] is not assignable to attribute attr with type None
# ✅ ty. | < No Error >
# ➖ mypy. | ERROR: Incompatible types in assignment (expression has type "int", variable has type "None")
# ➖ pyright | ERROR: Cannot assign to attribute "attr" for class "MyClass"
foo.attr = 1

In this example, pyrefly, mypy, and pyright eagerly type foo.attr as None and report an error when 1 is assigned to it — whereas ty recognizes that foo.attr = 1 should not actually be a type error, and instead types foo.attr as Unknown | None to allow the assignment. (Unknown is a new type used by ty to distinguish an implicit “unknown” Any from an explicitly written Any.)
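
If you do want the stricter behavior under every checker, an explicit annotation removes the ambiguity. A minimal sketch of my own (not from the talks), assuming the attribute is meant to hold an optional int:

class MyClass:
    # An explicit annotation states the intent, so all four checkers
    # should agree on the attribute's type.
    attr: int | None = None

foo = MyClass()
foo.attr = 1  # should be accepted everywhere: int is assignable to int | None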

As a consequence, this also means that pyrefly can catch some errors that other type checkers cannot. Take this example here:

from typing import reveal_type

my_list = [1, "b", None]
val = my_list.pop(1)

# ✅ pyrefly | revealed type: int | str | None
# ➖ ty. | Revealed type: `Unknown`
# ➖ mypy. | Revealed type is "builtins.object"
# ➖ pyright | Type of "val" is "Unknown"
reveal_type(val)

# ✅ pyrefly | ERROR: `*` is not supported between `None` and `Literal[2]`
# ➖ ty. | < No Error >
# ➖ mypy. | ERROR: Unsupported operand types for * ("object" and "int")
# ➖ pyright | < No Error >
new_val = val * 2

mypy technically did throw an error, but for the wrong reasons. For example, setting my_list = [1, "b"] would fix the program, but mypy still reports a mismatch between object and int.

Pyrefly implicitly types val as int | str | None, even though neither val nor my_list was explicitly typed. This lets it correctly catch the val * 2 error above.
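
For completeness, the error pyrefly flags here is easy to resolve with a narrowing check. A small sketch of my own, relying on pyrefly's inferred int | str | None:

my_list = [1, "b", None]
val = my_list.pop(1)

# Narrowing away None leaves int | str, and both int * 2 and str * 2 are valid.
if val is not None:
    new_val = val * 2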

This is just one of many examples, as more will be shown later in the Capabilities section.

Incrementalism

Both pyrefly and ty claim to be incremental — meaning that changing one file only triggers re-analysis of the affected area, not the entire program. Pyrefly uses a custom incremental engine behind the scenes for its type checker. In contrast, ty uses Salsa, the same incremental computation framework that powers rust-analyzer.

Interestingly, this means that ty has fine-grained incrementalization: changing a single function only triggers re-analysis of that function itself (and nothing else), then its dependent functions, and so on. Pyrefly, on the other hand, uses module-level incrementalization: changing a single function triggers re-analysis of the entire file/module, then its dependent files/modules, and so on.

The reason pyrefly chose module-level over fine-grained (at least from what I’ve gathered) is that module-level incrementalization is already fast enough in Rust, while fine-grained incrementalization results in a much more complex, harder-to-maintain codebase for minimal performance gains.

Capabilities

Both the pyrefly and ty teams make it VERY CLEAR that they are still unfinished and in early alpha, with known issues, bugs, and incomplete features. Despite that, I think it’s cool to go over what each supports as of now as it showcases what each team has focused on and determined to be important so far for their next-generation Python type checkers.

Implicit Type Inference

Implicit type inference is one of the showcase features of pyrefly. For example, here is a simple case of inferring return types:

from typing import Any, reveal_type

def foo(imp: Any):
    return str(imp)

a = foo(123)

# ✅ pyrefly | revealed type: str
# ➖ ty. | Revealed type: `Unknown`
# ➖ mypy. | Revealed type is "Any"
# ✅ pyright | Type of "a" is "str"
reveal_type(a)

# ✅ pyrefly | ERROR: `+` is not supported between `str` and `Literal[1]`
# ➖ ty. | < No Error >
# ➖ mypy. | < No Error >
# ✅ pyright | ERROR: Operator "+" not supported for types "str" and "Literal[1]"
a + 1

Here’s another example with inferring types of more complex collection objects (in this case, a dict):

from typing import reveal_type

my_dict = {
    key: value * 2
    for key, value in {"apple": 2, "banana": 3, "cherry": 1}.items()
    if value > 1
}

# ✅ pyrefly | revealed type: dict[str, int]
# ➖ ty. | Revealed type: `@Todo`
# ✅ mypy. | Revealed type is "builtins.dict[builtins.str, builtins.int]"
# ✅ pyright | Type of "my_dict" is "dict[str, int]"
reveal_type(my_dict)

But, here is where the “gradual guarantee” of ty comes in. Take this example here:

from typing import reveal_type

my_list = [1, 2, 3]

# ✅ pyrefly | revealed type: list[int]
# ➖ ty. | Revealed type: `list[Unknown]`
# ✅ mypy. | Revealed type is "builtins.list[builtins.int]"
# ✅ pyright | Type of "my_list" is "list[int]"
reveal_type(my_list)

# ➖ pyrefly | ERROR: Argument `Literal['foo']` is not assignable to parameter with type `int` in function `list.append`
# ✅ ty. | < No Error >
# ➖ mypy. | ERROR: Argument 1 to "append" of "list" has incompatible type "str"; expected "int" 
# ➖ pyright | ERROR: Argument of type "Literal['foo']" cannot be assigned to parameter "object" of type "int" in function "append"
my_list.append("foo")

pyrefly, mypy, and pyright all flag my_list.append("foo") as a typing error, even though it is technically allowed at runtime (Python collections can hold objects of multiple types!). If mixing types is the intended behavior, ty is the only checker that implicitly allows it without requiring additional explicit typing on my_list.
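
If mixing element types really is intended, the stricter checkers can be satisfied with an explicit annotation. A small sketch of my own (not from the talks):

from typing import reveal_type

# Declaring the element type up front tells every checker that the mix is deliberate.
my_list: list[int | str] = [1, 2, 3]

reveal_type(my_list)   # list[int | str]
my_list.append("foo")  # should now be accepted by all four checkers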

Generics

Another thing the pyrefly team mentioned during their talk was that while redesigning pyrefly from the ground up, they focused on the “hard problems first”. This means that a lot of the architecture around pyrefly was built around things like generics, overloads, and wildcard imports.

For example, here are a few cases where pyrefly and ty both resolve generics correctly:

from typing import reveal_type

# === Simple Case ===
class Box[T]:
    def __init__(self, val: T) -> None:
        self.val = val

b: Box[int] = Box(42)

# ✅ pyrefly | revealed type: int
# ✅ ty. | Revealed type: `Unknown | int`
# ✅ mypy. | Revealed type is "builtins.int"
# ✅ pyright | Type of "b.val" is "int"
reveal_type(b.val)

# ✅ pyrefly | ERROR: Argument `Literal[100]` is not assignable to parameter `val` with type `str` in function `Box.__init__`
# ✅ ty. | ERROR: Object of type `Box[int]` is not assignable to `Box[str]`
# ✅ mypy. | ERROR: Argument 1 to "Box" has incompatible type "int"; expected "str"
# ✅ pyright | ERROR: Type "Box[int]" is not assignable to declared type "Box[str]"
b2: Box[str] = Box(100)

# === Bounded Types with Attribute ===
class A:
    x: int | str

def f[T: A](x: T) -> T:
    # ✅ pyrefly | revealed type: int | str
    # ✅ ty. | Revealed type: `int | str`
    # ✅ mypy. | Revealed type is "Union[builtins.int, builtins.str]"
    # ✅ pyright | Type of "x.x" is "int | str"
    reveal_type(x.x)
    return x

Whereas here are some examples where pyrefly has better generic resolution compared to ty:

from typing import Callable, TypeVar, assert_type, reveal_type

# === Generic Class Without Explicit Type Param ===

class C[T]:
    x: T

c: C[int] = C()

# ✅ pyrefly | revealed type: C[int]
# ➖ ty. | `C[Unknown]`
# ✅ mypy. | Revealed type is "__main__.C[builtins.int]"
# ✅ pyright | Type of "c" is "C[int]"
reveal_type(c)

# ✅ pyrefly | revealed type: int
# ➖ ty. | Revealed type: `Unknown`
# ✅ mypy. | Revealed type is "builtins.int"
# ✅ pyright | Type of "c.x" is "int"
reveal_type(c.x)

# === Bounded Types with Callable Attribute ===

def func[T: Callable[[int], int]](a: T, b: int) -> T:
    # ✅ pyrefly | revealed type: int
    # ➖ ty. | ERROR: <Error: Object of type `T` is not callable>
    # ✅ mypy. | Revealed type is "builtins.int"
    # ✅ pyright | Type of "a(b)" is "int"
    reveal_type(a(b))
    return a

Interestingly enough, both pyrefly and ty seem to struggle with resolving covariance and contravariance relationships. In the example below, X only appears in return (covariant) position in A, so A should be inferred covariant in X, which makes A[bool] assignable to A[int] (since bool is a subtype of int). mypy and pyright allow the return, while pyrefly and ty reject it:

from __future__ import annotations

class A[X]:
    def f(self) -> B[X]:
        ...

class B[Y]:
    def h(self) -> B[Y]:
        ...

def cast_a(a: A[bool]) -> A[int]:
    # ➖ pyrefly | ERROR: Return type does not match returned value: expected `A[int]`, found `A[bool]`
    # ➖ ty. | ERROR: Returned type `A[bool]` is not assignable to declared return type `A[int]`
    # ✅ mypy. | < No Error >
    # ✅ pyright | < No Error >
    return a  # Allowed
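
For reference, before PEP 695 this variance had to be declared explicitly on the TypeVar rather than inferred. Here is the same example with declared covariance, a sketch of my own (I have not checked whether the pyrefly and ty alphas handle declared variance any better than inferred variance):

from typing import Generic, TypeVar

X_co = TypeVar("X_co", covariant=True)
Y_co = TypeVar("Y_co", covariant=True)

class A(Generic[X_co]):
    def f(self) -> "B[X_co]":
        ...

class B(Generic[Y_co]):
    def h(self) -> "B[Y_co]":
        ...

def cast_a(a: A[bool]) -> A[int]:
    return a  # accepted by mypy and pyright once A is declared covariant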

Error Messages

One explicit goal of ty is to have clear and concise error messages.

For example, here is a simple example of a function call with mismatched types:

[Screenshot: ty's error message]

Compared to pyrefly, mypy, and pyright:

[Screenshot: pyrefly's error message]

[Screenshot: mypy's error message]

[Screenshot: pyright's error message]

Here is another example with mismatched return types:

[Screenshot: ty's error message for a mismatched return type]

In my opinion, much cleaner! It’s exciting to see new and improved error messages coming to Python.

Intersection and Negation Types

Finally, one really cool feature the Astral team showed off was support for intersection and negation types — which they claim makes ty the only Python type checker to implement them. To illustrate, take a look at this example:

from typing import final, reveal_type

class WithX:
    x: int

@final
class Other:
    pass

def foo(obj: WithX | Other):
    if hasattr(obj, "x"):
        # ➖ pyrefly | revealed type: Other | WithX
        # ✅ ty. | Revealed type: `WithX`
        # ➖ mypy. | Revealed type is "Union[__main__.WithX, __main__.Other]"
        # ➖ pyright | Type of "obj" is "WithX | Other"
        reveal_type(obj)

@final is a decorator from the typing module (available since Python 3.8) that prevents a class from being subclassed. This matters here because it lets the type checker know that Other can never be subclassed to gain an x attribute in the future.
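
To make that concrete, here is a quick counterexample of my own:

# If Other were NOT marked @final ...
class Other:
    pass

class OtherChild(Other):
    x: int  # ... a subclass could introduce an `x` attribute,
            # so hasattr(obj, "x") would no longer rule out Other.

# With @final, no such subclass can exist, and Other & <has "x"> reduces to Never.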

Given the constraints that obj is either WithX or final type Other, and obj has to have attribute x, the only resolvable type for obj at reveal_type(obj) is WithX. Breaking down what happens behind the scenes:

(WithX | Other) & <Protocol with members 'x'>
=> (WithX & <Protocol with members 'x'>) | (Other & <Protocol with members 'x'>)
=> WithX | Never
=> WithX

Take a look at another example here:

from typing import reveal_type

class MyClass:
    ...

class MySubclass(MyClass):
    ...

def bar(obj: MyClass):
    if not isinstance(obj, MySubclass):
        # ➖ pyrefly | revealed type: MyClass
        # ✅ ty. | Revealed type: `MyClass & ~MySubclass`
        # ➖ mypy. | Revealed type is "__main__.MyClass"
        # ➖ pyright | Type of "obj" is "MyClass"
        reveal_type(obj)

ty is the only type checker that resolves obj at reveal_type(obj) to MyClass & ~MySubclass. This means that ty introduces new paradigms to Python typing: intersections and negations! Neat!

However, this is still in early alpha! For example, this case here:

from typing import Protocol, reveal_type

# Note: HasFoo is not defined in the original snippet; this Protocol is an
# assumed stand-in so the example is self-contained.
class HasFoo(Protocol):
    def foo(self) -> None: ...

def bar(obj: HasFoo):
    if not hasattr(obj, "bar"):
        reveal_type(obj)
        reveal_type(obj.foo)

reveal_type(obj) has the correct type of HasFoo & ~<Protocol with members 'bar'>, but reveal_type(obj.foo) resolves to @Todo even though obj.foo should be resolvable to the function foo given the constraints.

As one final fun party trick, here is ty using intersection and negation types to “solve” diophantine equations:

from typing import Literal, reveal_type

# Simply provide a list of all natural numbers here ...
type Nat = Literal[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]

def pythagorean_triples(a: Nat, b: Nat, c: Nat):
    reveal_type(a**2 + b**2 == c**2)
    # reveals 'bool': solutions exist (3² + 4² == 5²)

def fermats_last_theorem(a: Nat, b: Nat, c: Nat):
    reveal_type(a**3 + b**3 == c**3)
    # reveals 'Literal[False]': no solutions!

def catalan_conjecture(a: Nat, b: Nat):
    reveal_type(a**2 - b**3 == 1)
    # reveals 'bool': solutions exist (3² - 2³ == 1)

Final Thoughts

Overall, it’s exciting to have two new, faster type checkers in the Python ecosystem! As of right now, pyrefly and ty seem to follow two different philosophies. ty takes a gradual approach to typing: given a program that (theoretically) runs flawlessly, running a type checker should not raise any new typing errors, and if it does, it probably indicates an actual flaw somewhere in the code. Pyrefly takes a different approach, one similar to many state-of-the-art Python type checkers today: infer as many types as possible, at the cost of possibly introducing typing errors where it shouldn’t.

As mentioned multiple times, both pyrefly and ty are in early alpha. I strongly suspect the features and capabilities of both tools will converge as time goes on, but nevertheless, it is still cool to see where the two type checkers are at now and how they might come into play in different scenarios sometime in the future.

Go try these out for yourself now!

You can try out pyrefly over at pyrefly.org/sandbox, and ty over at play.ty.dev. Both also have their respective pip install commands and plugins for your editor (VSCode, Cursor, etc).

In the meantime, I heard rumors that Google is planning on open-sourcing their own Go-based Python type checker, so it’ll be very cool to check that out once it comes out 👀 …

Appendix

I just wanted to call out that ty’s tests are written in… MARKDOWN! How cool is that?

https://github.com/astral-sh/ruff/tree/main/crates/ty_python_semantic/resources/mdtest
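
For a rough flavor of the format (my own approximation of the style, not an excerpt from the repo): each test is a Markdown file, its headings name the test cases, and the Python code blocks inside carry inline assertions that the test harness checks, roughly like this:

x = 1
reveal_type(x)  # revealed: Literal[1]

def f(s: str) -> int:
    return len(s)

reveal_type(f("abc"))  # revealed: int

The exact assertion-comment syntax may differ from what the current harness expects, but the idea is that the expected diagnostics sit right next to the code, so the tests double as documentation.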

Thanks for reading!

If you notice any mistakes, comments, or feedback, please let me know!

Contact: blog@edward-li.com



Comments

  • By dcreager 2025-05-2717:284 reply

    [ty developer here]

    We are happy with the attention that ty is starting to receive, but it's important to call out that both ty and pyrefly are still incomplete! (OP mentions this, but it's worth emphasizing again here.)

    There are definitely examples cropping up that hit features that are not yet implemented. So when you encounter something where you think what we're doing is daft, please recognize that we might have just not gotten around to that yet. Python is a big language!

    • By flakes 2025-05-2718:585 reply

      Really loving those markdown style tests. I think it's a really fantastic idea that allows the tests to easily act as documentation too.

      Can you explain how you came up with this solution? Rust docs code-examples inspired?

    • By zem 2025-05-2717:591 reply

      surfacing revealed types as `@TODO` made me laugh, but thinking about it it's actually a pretty neat touch!

      • By dcreager 2025-05-2720:18

        It really helps in our mdtests, because then we can assert that not-implemented things are currently wrong but for the right reasons!

    • By echelon 2025-05-2718:244 reply

      Totally orthogonal question, but since you're deep in that side of Rust dev -

      The subject of a "scripting language for Rust" has come up a few times [1]. A language that fits nicely with the syntax of Rust, can compile right alongside rust, can natively import Rust types, but can compile/run/hot reload quickly.

      Do you know of anyone in your network working on that?

      And modulus the syntax piece, do you think Python could ever fill that gap?

      [1] https://news.ycombinator.com/item?id=44050222

      • By dcreager 2025-05-2718:57

        I don't know that I'd want the scripting language to be compiled, for reasons that are outside the scope of this reply. So removing that constraint, the coolest thing I've seen in this space recently is kyren's Piccolo:

        https://kyju.org/blog/piccolo-a-stackless-lua-interpreter/

      • By mdaniel 2025-05-2718:29

        > And modulus the syntax piece, do you think Python could ever fill that gap?

        I would never ever want a full fledged programming language to build type checking plugins, and doubly so in cases where one expects the tool to run in a read-write context

        I am not saying that Skylark is the solution, but its sandboxed mental model aligns with what I'd want for such a solution

        I get the impression the wasm-adjacent libraries could also help here, due to the WASI boundary already limiting what mutations are allowed

      • By tadfisher 2025-05-2720:281 reply

        There's Gluon, which doesn't share Rust's syntax but does have a Hindley-Milner-based type system and embeds pretty seamlessly in a Rust program.

        https://github.com/gluon-lang/gluon

        • By mdaniel 2025-05-280:19

          Please no

            let { (*>), (<*), wrap } = import! std.applicative

      • By julienfr112 2025-05-2718:412 reply

        Most of the time, you want the type to be dynamic in a scripting language, as you don't want to expose the types to the user. With this in mind, rhai and rune are pretty good. On the python front, there was also the pyoxidizer thing, but it seems dead.

        • By echelon 2025-05-2721:45

          Not necessarily!

          These are the strong vs weak, static vs dynamic axes.

          You probably want strong, but dynamic typing. eg., a function explicitly accepts only a string and won't accept or convert a float into a string implicitly or magically.

          You're free to bind or rebind variables to anything at any time, but using them in the wrong way leads to type errors.

          JavaScript has weak dynamic typing.

          Python has strong dynamic typing (though since types aren't annotated in function definitions, you don't always see it until a type is used in the wrong way at the leaves of the AST / call tree).

          Ruby has strong dynamic typing, but Rails uses method_missing and monkey patching to make it weaker through lots of implicit type coercions.

          C and C++ have weak static typing. You frequently deal with unstructured memory and pointers, casting, and implicit coercions.

          Java and Rust have strong static typing.

        • By tadfisher 2025-05-2720:24

          If the language has types at all, they're exposed to the user, even if the time of exposure is a runtime failure. I suspect you want inferred types, which can be had in statically-typed languages.

    • By davedx 2025-05-2719:33

      I am very interested in both of these. Coming from the TypeScript world I'm really interested in the different directions (type inference or not, intersections and type narrowing...). As a Python developer I'm wearily resigned to there being 4+ python type checkers out there, all of which behave differently. How very python...

      Following these projects with great interest though. At the end of the day, a good type checker should let us write code faster and more reliably, which I feel isn't yet the case with the current state of the art of type checking for python.

      Good luck with the project!

  • By suspended_state 2025-05-2718:266 reply

    I am not well versed in python programming, this is just my opinion as an outsider.

    For anyone interested in using these tools, I suggest reading the following:

    https://www.reddit.com/r/Python/comments/10zdidm/why_type_hi...

    That post should probably be taken lightly, but I think that the goal there is to understand that even with the best typing tools, you will have troubles, unless you start by establishing good practices.

    For example, Django is large code base, and if you look at it, you will observe that the code is consistent in which features of python are used and how; this project passes the stricter type checking test without troubles. Likewise, Meta certainly has a very large code base (why develop a type checker otherwise?), and they must have figured out that they cannot let their programmers write code however they like; I guess their type checker is the stricter one for that reason.

    Python, AFAIK, has many features, a very permissive runtime, and perhaps (not unlike C++) only some limited subset should be used at any time to ensure that the code is manageable. Unfortunately, that subset is probably different depending on who you ask, and what you aim to do.

    (Interestingly, the Reddit post somehow reminded me of the hurdles Rust people have getting the Linux kernel guys to accept their practice: C has a much simpler and carefree type system, but Rust being much more strict rubs those C guys the wrong way).

    • By mjr00 2025-05-2720:372 reply

      The top comment in that post shuts down the whole nonsense pretty quickly and firmly:

      > If you have a super-generic function like that and type hinting enforced, you just use Any and don't care about it.

      It's a stupid example, but even within the context of a `slow_add` function in a library: maybe the author originally never even thought people would pass in non-numeric values, so in the next version update instead of a hardcoded `time.sleep(0.1)` they decide to `time.sleep(a / b)`. Oops, now it crashes for users who passed in strings or tuples! If only there were a way to declare that the function is only intended to work with numeric values, instead of forcing yourself to provide backwards compatibility for users who used that function in unexpected ways that happened to work.

      IMO: for Python meant to run non-interactively with any sort of uptime guarantees, type checking is a no-brainer. You're actively making a mistake if you choose to not add type checking.

      • By notatallshaw 2025-05-2721:293 reply

        As the author of that post, I'd like to point out the example was meant to be stupid.

        The purpose was to show different ideologies and expectations on the same code don't work, such as strict backwards compatibilities, duck typing, and strictly following linting or type hinting rules (due to some arbitrary enforcement). Although re-reading it now I wish I'd spent more than an evening working on it, it's full of issues and not very polished.

        > If you have a super-generic function like that and type hinting enforced, you just use Any and don't care about it.

        Following the general stupidness of the post: they are now unable to do that because a security consultant said they have to enable and can not break RUFF rule ANN401: https://docs.astral.sh/ruff/rules/any-type/

        • By mjr00 2025-05-2721:422 reply

          > Following the general stupidness of the post: they are now unable to do that because a security consultant said they have to enable and can not break RUFF rule ANN401: https://docs.astral.sh/ruff/rules/any-type/

          Okay, then your function which is extremely generic and needs to support 25 different use cases needs to have an insane type definition which covers all 25 use cases.

          This isn't an indictment of the type system, this is an indictment of bad code. Don't write functions that support hundreds of input data types, most of which are unintended. Type systems help you avoid this, by the way.

          • By notatallshaw 2025-05-2812:403 reply

            > Don't write functions that support hundreds of input data types

            But by its nature, duck typing supports an unbounded number of input types and is what Python was built on.

            You've already decided duck typing is wrong and strict type adherence is correct, which is fine, but that doesn't fit the vast history of Python code, or in fact many of the core Python libraries.

            • By mjr00 2025-05-2814:451 reply

              > But by it's nature, duck typing supports an unbounded number of input types and is what Python was built on.

              You're trying to shove a square peg into a round hole. It's not about right or wrong. Either you want your function to operate on any type, attempt to add the two values (or perform any operation which may or may not be supported, i.e. duck typing), and throw a runtime error if it doesn't work--in which case you can leave it untyped or use `Any`--or you want stronger type safety guarantees so you can validate before runtime that nobody is calling your method with incorrect arguments, in which case you have to represent the types which you accept somehow.

              If you want to have a method that's fully duck typed, you're supposed to use `Any`. That's exactly why it exists. Inventing contrived scenarios about how you can't use `Any` is missing the point. It's like complaining C doesn't work if you're not allowed to use pointers.

              You're right that historically Python code was written with duck typing in mind, but now even highly flexible libraries like Pandas have type definition support. The ecosystem is way different from even 5-6 years ago, I can't think of any well-known libraries which don't have good typing support by now.

              • By notatallshaw 2025-05-2915:39

                > You're trying to shove a square peg into a round hole

                Yeah, that's literally the point of my Reddit post

            • By jononor 2025-05-297:48

              Duck typing is great, for example to support a range of numeric inputs - say fixed-precision integers, floats, dynamic precision integers, and numpy arrays, pandas series, tensorflow/pytorch tensors. That duck typing can support functions with unbounded types does not mean that it is necessary, or generally sensible, for a particular function to support unbounded types.

            • By sevensor 2025-05-2815:56

              Duck typing is great and Python’s type system has powerful support for it. You can for instance restrict a function to only objects with a frobnicate() method, without in any way constraining yourself on which implementation you accept. Type checking plus duck typing is very precise and powerful, and it helps me sleep at night.

        • By ramraj07 2025-05-2812:321 reply

          Stupid is okay. _Nonsense_ is not. Your example was nonsense, it was absurd. The moment I saw the first example I was like, this should be add_ints and should only take ints.

          Imagine I say "the human body is dumb! Here's an example: if I stab myself, it bleeds!" Like is that stupid or absurd?

          • By notatallshaw 2025-05-2812:49

            And yet, some Python users insistent on type hinting very dynamic Python code while trying to keep how dynamic it is.

        • By the_af 2025-05-282:261 reply

          But there was a conceivable way (maybe not in Python) to make a `slow_add` function very generic, yet only be defined over structures where any conceivable `+` operation is defined.

          You just have to say the type implements Semigroup.

          Yes, this would work if the arguments are lists, or integers, or strings. And it won't pass the typecheck for arguments that are not Semigroups.

          It may not work with Python, but only because its designers weren't initially interested in typechecking.

          • By sevensor 2025-05-2819:191 reply

            Challenge accepted.

                from dataclasses import dataclass
                from typing import Protocol, Self, TypeVar
            
                class Semigroup(Protocol):
                    def __add__(self, other: Self) -> Self:
                        ...
            
                T = TypeVar("T", bound=Semigroup)
                def join_stuff(first: T, *rest: T) -> T:
                    accum = first
                    for x in rest:
                        accum += x
                    return accum
            
                @dataclass
                class C:
                    x: int
            
                @dataclass
                class D:
                    x: int
                    def __add__(self, other: Self) -> Self:
                        return type(self)(self.x + other.x)
            
                @dataclass
                class E:
                    x: int
                    def __add__(self, other: Self) -> Self:
                        return type(self)(self.x + other.x)
            
                _: type[Semigroup] = D
                _ = E
            
                def doit() -> None:
                    print(join_stuff(1,2,3))
                    print(join_stuff((1,), tuple(), (2,)))
                    print(join_stuff("a", "b", "c"))
                    print(join_stuff(D(1), D(2)))
                    print(join_stuff(D(1), 3))
                    print(D(1) + 3) # caught by mypy
                    print(D(1) + E(3)) # caught by mypy
                    print(join_stuff(1,2,"a")) # Not caught by mypy
                    print(join_stuff(C(1), C(2))) # caught by mypy
                doit()
            
            
            
            Now, this doesn't quite work to my satisfaction. Mypy lets you freely mix and match values of incompatible types, and I don't know how to fix that. Basically, if you directly try to add a D and an int, mypy will yell at you, but there's no way I've found to insist that the arguments to join_stuff, in addition to being Semigroups, are all of the compatible types. It looks like mypy is checking join_stuff as if Semigroup were a concrete class, so once you're inside join_stuff, the actual types of the arguments become irrelevant.

            However, it will correctly tell you that it can't accept arguments that don't define addition at all, and that's better than nothing.

            • By the_af 2025-05-2820:241 reply

              Pretty cool that you got this far though!

              I think at this point one starts to fight against Python, which wasn't designed with this in mind. But cool nonetheless.

              • By sevensor 2025-05-290:01

                Thanks! My approach is to stop once it starts to hurt, and figure out what I should expect the type checker to miss. The type system and I are both getting better at it as time goes by. It’s not perfect, but it’s way better than not having it.

      • By lexicality 2025-05-2813:32

        One thing that post does do though is very clearly highlight the difference between Python's type system and say ... TypeScript's.

        TypeScript's goal is to take a language with an unhinged duck type system that allows people to do terrible things and then allow you to codify and lock in all of those behaviours exactly as they're used.

        Mypy (and since it was written by GVM and codified in the stdlib by extension Python and all other typecheckers)'s goal is to take a language with an unhinged duck type system that allows people to do terrible things and then pretend that isn't the case and enforce strict academic rules and behaviours that don't particularly care about how real people write code and interact with libraries.

        If you include type hints from the very beginning then you are forced to use the very limited subset of behaviours that mypy allows you to codify and everything will be "fine".

        If you try to add type hints to a mature project, you will scream with frustration as you discover how many parts of the codebase literally cannot be represented in the extremely limited type system.

    • By mhh__ 2025-05-2719:126 reply

      At this point I'm fairly convinced that the effort one would spend trying to typecheck a python program is better spent migrating away from python into a language that has a proper type system, then using interop so you can still have the bits/people that need python be in python.

      Obviously that isn't always possible but you can spend far too long trying to make python work.

      • By bjackman 2025-05-287:00

        I think you're forgetting how easy type annotation is.

        I occasionally spend like 2h working on some old python code. I will spend say 15 minutes of that time adding type annotations (sometimes requires some trivial refactoring). This has an enormous ROI, the cost is so low and the benefit is so immediate.

        In these cases migrating code to a proper language and figuring out interop is not on my radar, it would be insane. So having the option to get some best-effort type safety is absolutely fantastic.

        I can definitely see your point, it's a useful analysis for projects under heavy development. But if you have a big Python codebase that basically just works and only sees incremental changes, adding type annotations is a great strategy.

      • By ramraj07 2025-05-2812:38

        If you're supposedly good at software and you spent too long trying to make python work consider the possibility that you're not good at software?

        Python has flaws and big ones at that, but there's a reason it's popular. Especially with tools like pydantic and fastapi and uv (and streamlit) you can do insane things in hours what would take weeks and months before. Not to mention how good AI is at generating code in these frameworks. I especially like typing using pydantic, any method is now able to dump and load data from files and dbs and you get extremely terse validated code. Modern IDEs also make quick work of extracting value even from partially typed code. I'd suggest you just open your mind up to imperfect things and give them a shot.

      • By J_Shelby_J 2025-05-2721:331 reply

        Six months into learning to build a modern python app, with linters, type systems, tests, venvs, package managers, etc… I realized that the supposed difficulty of rust is drastically less than coming up to speed and then keeping up with the python “at scale” ecosystem.

        • By mdaniel 2025-05-280:22

          My strong suspicion is that such a story depends a great deal upon the personalities of the developers much more than any {tool chaos + type chaos} --- {new syntax + lifespan annotations} spectrum

      • By rtpg 2025-05-288:19

        I don't understand this point at all. I've worked on Django codebases which have a huge set of typing problems... and while it's not 100% I get a lot of value out of type checking.

        You annotate enough functions and you get a really good linter out of it!

      • By davedx 2025-05-2719:452 reply

        Unfortunately with us being in the middle of the AI hype cycle, everyone and their dog is currently busy migrating to python.

        • By hu3 2025-05-2719:552 reply

          I don't see why AI hype means more Python code.

          State of the art AI models are all closed source and accessible through an API anyways. APIs that any language can easily access.

          As for AI model development itself, yes it's mostly Python, but that's a niche.

          • By mzl 2025-05-288:241 reply

            I think you are underestimating the massive amounts of Python code that is built around these things. Also, a lot of businesses are not really interested in using an API for an LLM, instead they will modify and fine-tune their own models and deploy in their own data-centers (virtual or physical), and that means even more Python code.

            Sure, a system that only relies on token factory LLM APIs can be written in any language, but that is not the full width and breadth of the AI hype.

            • By hu3 2025-05-2814:311 reply

              > Also, a lot of businesses are not really interested in using an API for an LLM, instead they will modify and fine-tune their own models and deploy in their own data-centers

              You realize model training costs millions, right? "A lot of businesses" doesn't pass the sniff test here.

              I'm not even counting the large swaths of data required to train. And the expensive specialists.

              And then you'll have to retrain outdated models every so often.

              There's a reason that AI has only a handful of players delivering SoTA models and these players are all worth $5B+.

              • By mzl 2025-06-097:05

                SOTA LLM model training costs a lot, yes. But fine-tuning and training of smaller models is a lot cheaper.

                I've trained useful vision-models that delivered business value for industrial applications on a MacBook overnight.

          • By davedx 2025-05-286:26

            Because everyone and their dog thinks they need to be ready to develop their own models, in python.

            This is honestly a thing, at least in the startup world.

        • By adamors 2025-05-2720:00

          I’d be surprised if _anyone_ is migrating server code to Python because of AI.

      • By guappa 2025-05-288:06

        If you do that you need to compile, which means you can't just distribute a text file with your python program. You need a build infrastructure for every python version, every architecture and every OS.

        Have fun with that!

    • By kccqzy 2025-05-2719:132 reply

      > Python, AFAIK, has many features, a very permissive runtime, and perhaps (not unlike C++) only some limited subset should be used at any time to ensure that the code is manageable. Unfortunately, that subset is probably different depending on who you ask, and what you aim to do.

      I'll get started on the subset of Python that I personally do not wish to use in my own codebase: meta classes, descriptors, callable objects using __call__, object.__new__(cls), names that trigger the name mangling rules, self.__dict__. In my opinion, all of the above features involve too much magic and hinder code comprehension.

      • By kstrauser 2025-05-2719:422 reply

        There's a time and a place for each of them:

        * Meta classes: You're writing Pydantic or an ORM.

        * Descriptors: You're writing Pydantic or an ORM.

        * Callable objects: I've used these for things like making validators you initialize with their parameters in one place, then pass them around so other functions can call them. I'd probably just use closures if at all possible now.

        * object.__new__: You're writing Pydantic or an ORM.

        * Name mangling: I'm fine with using _foo and __bar where appropriate. Those are nice. Don't ever, ever try to de-mangle them or I'll throw a stick at you.

        * self.__dict__: You're writing Pydantic or an ORM, although if you use this as shorthand for "doing things that need introspection", that's a useful skill and not deep wizardry.

        Basically, you won't need those things 99.99% of the time. If you think you do, you probably don't. If you're absolutely certain you do, you might. It's still good and important to understand what they are, though. Even if you never write them yourself, at some point you're going to want to figure out why some dependency isn't working the way you expected, and you'll need to read and know what it's doing.

        • By guappa 2025-05-288:27

          I never understood why pydantic reimplemented attrs, but doing it much slower, instead of just using attrs.

        • By kccqzy 2025-05-2720:083 reply

          > Basically, you won't need those things 99.99% of the time

          That's kind of my point. If you don't need a language feature 99.99% of the time perhaps it is better to cut it out from your language altogether. Well unless your language is striving to have the same reputation as C++. In Python's case here's a compromise: such features can only be used in a Python extension in C code, signifying their magic nature.

          • By orbital223 2025-05-2720:36

            I don't need my car's airbags 99.99% of the time.

          • By maleldil 2025-05-2722:29

          I think it's fine to have those if it makes API design better. In my mind, there's "code you should write" and there's "code only libraries should write".

          • By mixmastamyk 2025-05-2720:38

            A lot of people want pydantics and orms.

      • By bbkane 2025-05-284:241 reply

        You should try Go!

        • By guappa 2025-05-288:28

          It should be banned by the geneva convention.

    • By fastasucan 2025-05-2720:451 reply

      Can you share a little bit about what makes you form opinions when you are not even using the language? I think it's fascinating how discussions about typing especially make people shake their fists at a language they don't even use - and, like your post, make up some contrived example.

      >I think that the goal there is to understand that even with the best typing tools, you will have troubles, unless you start by establishing good practices.

      Like - what makes you think that python developers don't understand stuff about Python, when they are actively using the language as opposed to you?

      • By suspended_state 2025-05-287:43

        Indeed, I'm not a regular Python practitioner. I had to use it from time to time because it's the language chosen by the tools I happened to use at that time, like Blender, or Django. In the former case, it wasn't very enjoyable (which says a lot about my skills in that area, or rather lack thereof), while in the latter case I found it quite likeable. So that's my background as far as python goes.

        I must admit that I largely prefer static typing, which is why I got interested in that article. It's true that trying to shoehorn this feature in the Python ecosystem is an uphill battle: there's a lot of good engineering skill spent on this.

        Perhaps there's a connection to make between this situation and an old theorem about incompleteness?

        https://copilot.microsoft.com/shares/2LpT2HFBa3m6jYxUhk9fW

        (was generated in quick mode, so you might want to double check).

    • By flanked-evergl 2025-05-2722:521 reply

      As someone who has been writing python for years, the worst mistake I have ever seen people make is not adding type hints and not using a type checker.

      • By stavros 2025-05-289:351 reply

        Also not creating custom, expressive Pydantic types and using nested dicts in places. Nested dicts suck, you never know what you're getting, and it's well worth the time converting them to classes.

        • By mejutoco 2025-05-2812:28

          TypedDicts or data classes are both a good idea.

    • By Groxx 2025-05-2721:01

      To try to tl;dr that rather long post:

      > When you add type hints to your library's arguments, you're going to be bitten by Hyrum's Law and you are not prepared to accurately type your full universe of users

      That's understandable. But they're making breaking changes, and those are just breaking-change pains - it's almost exactly the same as if they had instead done this:

          def slow_add(a, b):
              if not isinstance(a, int):
                  raise TypeError
              ...
      
      but anyone looking at that would say "well yeah, that's a breaking change, of course people are going to complain".

      The only real difference here is that it's a developer-breaking change, not a runtime-breaking one, because Python does not enforce type hints at runtime. Existing code will run, but existing tools looking at the code will fail. That offers an easier workaround (just ignore it), but is otherwise just as interruptive to developers because the same code needs to change in the same ways.

      ---

      In contrast! Libraries can very frequently add types to their return values and it's immediately useful to their users. You're restricting your output to only the values that you already output - essentially by definition, only incorrect code will fail when you do this.

  • By senkora 2025-05-2716:055 reply

    > ty, on the other hand, follows a different mantra: the gradual guarantee. The principal idea is that in a well-typed program, removing a type annotation should not cause a type error. In other words: you shouldn’t need to add new types to working code to resolve type errors.

    The gradual guarantee that Ty offers is intriguing. I’m considering giving it a try based on that.

    With a language like Python with existing dynamic codebases, it seems like the right way to do gradual typing.

    • By rendaw 2025-05-2716:427 reply

      Gradual typing means that an implicit "any" (unknown type) anywhere in your code base is not an error or even a warning - even in critical code you thought was fully typed, where you mistakenly introduce a type bug and, due to some syntax or inference limit, the type checker unexpectedly loses the plot and confidently tells you "no problems in this file!"

      I get where they're coming from, but the endgame was a huge issue when I tried mypy - there was no way to actually guarantee that you were getting any protection from types. A way to assert "no graduality to this file, it's fully typed!" is critical, but gradual typing is not just about migrating but also about the crazy things you can do in dynamic languages and being terrified of false positives scaring away the people who didn't value static typing in the first place. Maybe calling it "soft" typing would be clearer.

      I think gradual typing is an anti-pattern at this point.

      • By dcreager 2025-05-2717:353 reply

        > Gradual typing means that an implicit "any" (unknown type) anywhere in your code base is not an error or even a warning. Even in critical code you thought was fully typed. Where you mistakenly introduce a type bug and due to some syntax or inference limits the type checker unexpectedly loses the plot and tells you confidently "no problems in this file!"

        This is a good point, and one that we are taking into account when developing ty.

        The benefit of the gradual guarantee is that it makes the onboarding process less fraught when you want to start (gradually) adding types to an untyped codebase. No one wants a wall of false positive errors when you first start invoking your type checker.

        The downside is exactly what you point out. For this, we want to leverage that ty is part of a suite of tools that we're developing. One goal in developing ty is to create the infrastructure that would let ruff support multi-file and type-aware linter rules. That's a bit hand-wavy atm, since we're still working out the details of how the two tools would work together.

        So we do want to provide more opinionated feedback about your code — for instance, highlighting when implicit `Any`s show up in an otherwise fully type-annotated function. But we view that as being a linter rule, which will likely be handled by ruff.

        • By genshii 2025-05-2718:13

          This makes sense to me and is exactly what TypeScript does. Implicit `any`s do not raise TypeScript errors (which, by definition, is expected), but obviously that means if there is an `any`, it's potentially unsafe. To deal with this, you can turn on `noImplicitAny` or strict mode (which 99% of projects probably have enabled anyway).

          Difference here that strict mode is a tsc option vs. having this kind of rule in the linter (ruff), but the end result is the same.

          Anyway, that was a long winded way of saying that ty or ruff definitely needs its own version of a "strict" mode for type checking. :)

        • By eternityforest 2025-05-2717:451 reply

          Could there ever be a flag to turn off the gradual guarantee and get stricter behavior?

        • By pydry 2025-05-2719:39

          You could give a score to different folders or files to indicate a level of "type certainty" and allow people to define failure thresholds.

      • By josevalim 2025-05-2718:532 reply

        > Gradual typing means that an implicit "any" (unknown type) anywhere in your code base is not an error or even a warning.

        That depends on the implementation of gradual typing. Elixir implements gradual set-theoretic types where dynamic types are a range of existing types and can be refined for typing violations. Here is a trivial example:

            def example(x) do
              {Integer.to_string(x), Atom.to_string(x)}
            end
        
        Since the function is untyped, `x` gets an initial value of `dynamic()`, but it still reports a typing violation because it first gets refined as `dynamic(integer())` which is then incompatible with the `atom()` type.

        We also introduced the concept of strong arrows, which allows the dynamic and static parts of a codebase to interact without introducing runtime checks while remaining sound. More information here: https://elixir-lang.org/blog/2023/09/20/strong-arrows-gradua...

        • By HelloNurse 2025-05-28 14:23 (1 reply)

          How is this function definition (or maybe just its parameter x) "untyped"? There is enough information to deduce that the type of parameter x is empty and the type of the function doesn't matter because there is an error.

          If the body of the function contained only the first or the second call, the verdict would have been that x is respectively an Integer or an Atom and the type of the function is the type of the contained expression.

          • By josevalim 2025-05-28 21:06

            For us type inference is the same as type checking where all parameters are given the dynamic type. So even if you explicitly added a signature that said dynamic, we would still find a violation, where others would not. The point is that dynamic does not have to mean “anything goes”.

        • By _carljm 2025-05-28 14:00 (1 reply)

          ty also implements gradual set-theoretic types, and can represent "ranged" dynamic types (as intersections or unions with Any/Unknown). We don't currently refine dynamic type based on all uses, as suggested here, though we've considered something very much like this for invariant generics.

          In your example, wouldn't `none()` be a type for `x` that satisfies both `Integer.to_string(x)` and `Atom.to_string(x)`? Or do you special-case `none()` and error if it occurs?

          • By josevalim 2025-05-28 21:16

            Oh, that’s exciting to hear! I would love to exchange notes and I know one of the lead researchers of set theoretic types would love to learn more about your uses too. If that sounds fun to you, you can find me on Gmail (same username).

            In our case, we implement a bidirectional system where, before applying Integer.to_string to x, we compute the domain of Integer.to_string (which is integer) and pass it up. If x is a dynamic type, then we refine it. So on the first call, x refines to `dynamic & integer`, and then we apply it.

            The second refinement fails because it becomes none, so we discard it, but it means the application on Atom.to_string will fail anyway. So yes, we check for emptiness and discard none.
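
            If it helps, the refinement above can be modelled (very loosely) as set intersection. A toy Python sketch of the idea, not how Elixir actually implements it:

                # Toy model only: types as sets, dynamic() as the full range,
                # refinement as intersection, none() as the empty set.
                INTEGER = frozenset({"integer"})
                ATOM = frozenset({"atom"})
                DYNAMIC = INTEGER | ATOM  # dynamic() ranges over existing types

                def refine(current, domain):
                    refined = current & domain
                    if not refined:  # none(): this application can never succeed
                        raise TypeError(f"typing violation: {set(current)} vs {set(domain)}")
                    return refined

                x = DYNAMIC
                x = refine(x, INTEGER)  # ok: x is now dynamic(integer())
                x = refine(x, ATOM)     # empty intersection, so a violation is reported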

      • By mmoskal 2025-05-27 16:52

        As mentioned in other comments: TypeScript, which follows this same gradual-typing approach, has a number of flags to disable it (gradually, so to speak). No reason ty couldn't do the same.

      • By belmont_sup 2025-05-27 17:07 (1 reply)

        Responding to your gradual-typing-as-anti-pattern bit: I agree that dynamic-language behaviors can get extreme, but it's also easy to end up in crazy type land. Putting type-system debates aside, teams can always add runtime checks (e.g., pydantic) to ensure their types match reality.

        Sorbet (the Ruby type checker) does this by introducing runtime checks on signatures.

        Similarly, in TS we have zod.

        • By MeetingsBrowser 2025-05-27 17:31 (3 replies)

          > teams can always add runtime checks like pydantic to ensure your types match reality.

          That's the problem with bugs, though: there's always something that could have been done to avoid them =)

          Pydantic works great in specific places, like validating user supplied data, but runtime checks as a replacement for static type checkers are not really feasible.

          Every caller would need to check that the function is being called correctly (number and position of args, kwarg names, etc) and every callee would need to manually validate that each arg passed matches some expected type.
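
          For anyone who hasn't lived it, here's a rough sketch (made-up function) of what that callee-side validation looks like, compared to what annotations give you before the code ever runs:

              # Made-up example of manual, callee-side runtime validation.
              def send_invoice(customer_id, amount, currency="USD"):
                  if not isinstance(customer_id, int):
                      raise TypeError("customer_id must be an int")
                  if not isinstance(amount, (int, float)):
                      raise TypeError("amount must be a number")
                  if not isinstance(currency, str):
                      raise TypeError("currency must be a str")
                  ...  # actual work goes here

              # These checks only fire on the code paths you actually execute.
              # With `def send_invoice(customer_id: int, amount: float,
              # currency: str = "USD") -> None:` a static checker verifies every
              # call site (arg count, kwarg names, types) before runtime.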

          • By eternityforest 2025-05-27 17:47

            Pydantic also takes CPU time and doesn't do anything till runtime.

            Type checking is real time in the IDE and lets you fix stuff before you waste fifteen minutes actually running it.

          • By belmont_sup 2025-05-28 3:01

            To be clear, I myself prefer sound type systems.

            But the reality is that teams have started with untyped Python, Ruby, and JavaScript, have been productive, and now need to gradually add static types to remain productive.

            > Every caller would need to check that the function...

            The nice part here is where the gradual part comes in. As you are able to type more of your code, you're able to move where you add your runtime validation, and eventually you'll be able to move all validation to the edges of your system.

          • By Spivak 2025-05-27 17:47 (1 reply)

            pydantic does have @validate_call for this use-case.
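
            Quick sketch of what that looks like (assuming pydantic v2):

                from pydantic import validate_call

                @validate_call
                def scale(value: int, factor: float = 1.0) -> float:
                    return value * factor

                scale(2, factor=1.5)    # fine
                scale("2", factor=1.5)  # "2" is coerced to 2 in pydantic's default lax mode
                scale([], factor=1.5)   # raises pydantic.ValidationError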

            • By guappa 2025-05-28 8:33

              Profile it and see how much slower it gets :)

      • By rtpg 2025-05-28 8:22

        In code where you really want these guarantees, you turn on errors like "no implicit any" in mypy and tighten the restrictions on the files you care about.

        You still have the "garbage in/garbage out" problem at the boundaries, but at the very least you can improve confidence. And if you're hardcore... turn that on everywhere, disallow explicit Any, write wrappers around all of your untyped dependencies, etc. You can get what you want; it just might be a lot of work.
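
        Concretely, that can be as small as a per-file opt-in (a sketch; double-check the flag spellings against your mypy version, and the same options can also be set per-module in mypy.ini or pyproject.toml):

            # critical.py -- opt just this file into stricter checking
            # mypy: disallow-untyped-defs, disallow-any-expr

            def parse_port(raw: str) -> int:   # fine: fully typed
                return int(raw)

            def helper(x):                     # flagged: untyped defs not allowed here
                return x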

      • By tclancy 2025-05-27 19:24

        Yeah, I’m torn because, in my experience, gradual typing means the team members who want it implement it in their code and the others do not or are very lax in their typing. Some way of swapping between gradual and strict would be nice.

      • By guappa 2025-05-28 8:31

        15 seconds after doing "man mypy": --disallow-any-expr

        Less than it took you to write all that.

    • By yoyohello13 2025-05-27 16:22

      Unless you're doing greenfield work, gradual typing is really the only way. I've incorporated type hinting into several legacy Python code bases with mypy, and really the only sensible way is to opt in one module at a time. If pyrefly doesn't support that, I think its use will be pretty limited. Unless maybe they are going for the LLM code-gen angle; I could see a very fast and strict type checker being useful for LLMs generating Python scripts.

    • By RandomBK 2025-05-27 16:34

      It reminds me of the early days of the TypeScript rollout, which similarly focused on a smooth onboarding path for existing large projects.

      More restrictive requirements (e.g. `noImplicitAny`) could be turned on one at a time before eventually flipping the `strict` switch to opt in to all the checks.

    • By tialaramex 2025-05-27 18:38

      Although I'm paid to write (among other things) Python and not Rust, I would think of myself as a Rust programmer and to me the gradual guarantee also makes most sense.

    • By IshKebab 2025-05-27 19:03

      This is a big turnoff for me. Half the point of adding type annotations to Python is to tame its error-prone dynamic typing. I want to know when I've done something stupid, even if it is technically allowed by Python itself.

      Hopefully they'll add some kind of no-implicit-any or "strict" mode for people who care about having working code...
