# Basedpyright > have you ever wanted to adopt a new tool or enable new checks in an existing project, only to be immediately bombarded with thousands of errors you'd have to fix? baseline solves this problem by allow --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/baseline.md # baseline have you ever wanted to adopt a new tool or enable new checks in an existing project, only to be immediately bombarded with thousands of errors you'd have to fix? baseline solves this problem by allowing you to only report errors on new or modified code. it works by generating a baseline file keeping track of the existing errors in your project so that only errors in newly written or modified code get reported. to enable baseline, run `basedpyright --writebaseline` in your terminal or run the _"basedpyright: Write new errors to baseline"_ task in your editor. this will generate a baseline file at `./.basedpyright/baseline.json` in your project. you should commit this file so others working on your project can benefit from it too. you can customize the baseline file path [using the `baselineFile` setting](../configuration/config-files.md#baselineFile) or [using the `--baselinefile` CLI argument](../configuration/command-line.md#command-line). ## how often do i need to update the baseline file? by default, this file gets automatically updated as errors are removed over time in both the CLI and the language server. you should only manually run the write baseline command in the following scenarios: - a baselined error incorrectly resurfaces when updating unrelated code - you're enabling a new diagnostic rule and want to baseline all the new errors it reported if you need to suppress a diagnostic for another reason, consider using [a `# pyright: ignore` comment](../configuration/comments.md#prefer-pyrightignore-comments) instead. ## disabling automatic updates for baselined error removals if you want more control over when the baseline file is updated, use the `baselineMode` setting in either the [language server](../configuration/language-server-settings.md) or [the CLI](../configuration/command-line.md#option-2-baselinemode-experimental). for example, using the `discard` mode will prevent the baseline file from being automatically updated when baselined errors are removed. !!! tip if you disable automatic baseline updates in the language server, a potential alternative workflow for still having the baseline file updated with removed errors is to set up a [prek hook](../installation/prek-hook.md) in your project to run the basedpyright CLI. this would take care of error removals at commit time instead of during editor saves. ## how does it work? each baselined error is stored and matched by the following details: - the path of the file it's in (relative to the project root) - its diagnostic rule name (eg. `reportGeneralTypeIssues`) - the position of the error in the file (column only, which prevents errors from resurfacing when you add or remove lines in a file) no baseline matching strategy is perfect, so sometimes old errors can resurface when you're moving code around. if that happens, you can explicitly regenerate the baseline file by running the _"basedpyright: Write new errors to baseline"_ command in your editor or [via the command line](https://docs.basedpyright.com/latest/configuration/command-line/#regenerating-the-baseline-file) ## how is this different to `# pyright: ignore` comments? 
ignore comments are typically used to suppress a false positive or work around some limitation in the type checker. baselining is a way to suppress many valid instances of an error across your whole project, to avoid the burden of having to update thousands of lines of old code just to adopt stricter checks on your new code.

## credit

this is heavily inspired by [basedmypy](https://kotlinisland.github.io/basedmypy/baseline).

---

# Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/better-defaults.md

# better defaults

we believe that type checkers and linters should be as strict as possible by default. this ensures that the user is aware of all the available rules so they can more easily make informed decisions about which rules they don't want enabled in their project. that's why the following defaults have been changed in basedpyright.

## `typeCheckingMode`

used to be `"basic"`, but now defaults to `"recommended"`, which enables all diagnostic rules by default. this may seem daunting at first, however we have some solutions to address some concerns users may have with this mode:

- less severe diagnostic rules are reported as warnings instead of errors. this reduces [alarm fatigue](https://en.wikipedia.org/wiki/Alarm_fatigue) while still ensuring that you're aware of all potential issues that basedpyright can detect. [`failOnWarnings`](../configuration/config-files.md#failOnWarnings) is also enabled by default in this mode, which causes the CLI to exit with a non-zero exit code if any warnings are detected. you can disable this behavior by setting `failOnWarnings` to `false`.
- we support [baselining](./baseline.md) to allow for easy adoption of more strict rules in existing codebases.
- we've added a new setting, [`allowedUntypedLibraries`](../configuration/config-files.md#allowedUntypedLibraries), which allows you to turn off rules about unknown types on a per-module basis. this can be useful when working with third party packages that aren't properly typed.

## `pythonPlatform`

pyright used to assume that the operating system it's being run on is the only operating system your code will run on, which is rarely the case. in basedpyright, `pythonPlatform` defaults to `All`, which assumes your code can run on any operating system.

## default value for `pythonPath`

configuring your python interpreter in pyright is needlessly confusing. if you aren't using vscode or you aren't running it from inside a virtual environment, you'll likely encounter errors for unresolved imports as a result of pyright using the wrong interpreter. to fix this you'd have to use the `venv` and `venvPath` settings, which are unnecessarily difficult to use. for example, `venv` can only be set in either [the config file](../configuration/config-files.md) or [the language server settings](../configuration/language-server-settings.md), but `venvPath` can only be set in the language server settings or [the command line](../configuration/command-line.md).

most of the time your virtual environment is located in the same spot: a folder named `.venv` in your project root. this is the case if you're using [uv](https://docs.astral.sh/uv/) (which you should be, it's far better than any alternative). so why not just check for this known common venv path and use that by default? that's exactly what basedpyright does. if neither `pythonPath` nor `venvPath`/`venv` is set, basedpyright will check for a venv at `./.venv` and, if it finds one, use its python interpreter as the value for `pythonPath`.
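to make that lookup order concrete, here's a rough sketch of the fallback behavior described above. this is illustrative only (not basedpyright's actual implementation), and the function and parameter names are invented for the example:

```py
from pathlib import Path


def resolve_python_path(
    python_path: str | None,
    venv_path: str | None,
    venv: str | None,
    project_root: Path,
) -> Path | None:
    """illustrative sketch of the interpreter discovery order described above."""
    if python_path:
        # an explicitly configured interpreter always wins
        return Path(python_path)
    if venv_path and venv:
        # pyright's venvPath/venv pair (interpreter lives under Scripts/ on windows)
        return Path(venv_path) / venv / "bin" / "python"
    default_venv = project_root / ".venv"
    if default_venv.is_dir():
        # basedpyright's extra fallback: a `.venv` folder in the project root
        return default_venv / "bin" / "python"
    # otherwise fall back to pyright's default interpreter discovery
    return None
```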
--- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/dataclass-transform.md # Extra `dataclass_transform` features `typing.dataclass_transform` is a bit strange. It is a compromise to fulfill the urgent need to support dataclass-like objects (mostly in `pydantic` and `attrs`) to some extent without adding a generic and flexible mechanism for defining your own `dataclass_transform`. If your use case deviates from what `dataclass_transform` explicitly supports, it can be difficult or impossible to work around that. Luckily, `dataclass_transform` accepts arbitrary keyword arguments at runtime to allow type checkers to add their own little hacks on top of the standard ones. `basedpyright` currently supports these options: - `skip_replace`: setting this to `True` disables the synthesis of the `__replace__` method. Pyright (and basedpyright) assumes that classes produced by a dataclass transform define a `__replace__` method, as long as the class is not marked with `init=False` and does not define a custom `__init__` method. You may want a `dataclass`-like decorator or metaclass that does not support `__replace__` at all. In particular, `__replace__` messes with the variance inference of frozen dataclasses (see [this discourse thread](https://discuss.python.org/t/make-replace-stop-interfering-with-variance-inference/96092) for details). Here's a recipe you can use to work around this: ```py from typing import dataclass_transform @dataclass_transform(skip_replace=True, frozen_default=True) def frozen[T: type](t: T) -> T: return dataclass(frozen=True, slots=True)(t) # check that this enables covariance: @frozen class Box[T]: value: T box1: Box[str] = Box("test") box2: Box[str | int] = box1 ``` All of these options require the [`enableBasedFeatures`](../configuration/config-files.md#enableBasedFeatures) configuration option to be set to `true`. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/errors-on-invalid-configuration.md # errors on invalid configuration in pyright, if you have any invalid configuration, it may or may not print a warning to the console, then it will continue type-checking and the exit code will be 0 as long as there were no type errors: ```toml title="pyproject.toml" [tool.pyright] mode = "strict" # wrong! the setting you're looking for is called `typeCheckingMode` ``` in this example, it's very easy for errors to go undetected because you thought you were on strict mode, but in reality pyright just ignored the setting and silently continued type-checking on "basic" mode. to solve this problem, basedpyright will exit with code 3 on any invalid config when using the CLI, and show an error notification when using the language server. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/fixes-for-rules.md # fixes for existing diagnostic rules ## `reportRedeclaration` and `reportDuplicateImport` pyright does not report redeclarations if the redeclaration has the same type: ```py foo: int = 1 foo: int = 2 # no error ``` nor does it care if you have a duplicated import in multiple different `import` statements, or in aliases: ```py from foo import bar from bar import bar # no error from baz import foo as baz, bar as baz # no error ``` basedpyright solves both of these problems by always reporting an error on a redeclaration or an import with the same name as an existing import. 
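as a rough illustration, both patterns are flagged under the rule names above (`reportRedeclaration` and `reportDuplicateImport`); the module and variable names here are arbitrary:

```py
from json import loads
from pickle import loads  # reported: an import with the same name as an existing import

count: int = 1
count: int = 2  # reported: a redeclaration, even though the type is unchanged
```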
## `reportUnreachable` [`reportUnreachable`](../configuration/config-files.md#reportUnreachable) was the first new diagnostic rule that was added to basedpyright, however this rule was recently added to pyright too, but their version is far less safe. specifically, it doesn't report an error on `sys.version_info` or `sys.platform` checks, which are by far the most common cases where pyright considers code to be unreachable. the reason we added `reportUnreachable` to basedpyright was not just to identify code that will never be reached, but mainly to identify code that will _not be type checked._ !!! example assuming that you're running python 3.13 or above during development: ```py if sys.version_info < (3, 13): 1 + "" # no error ``` normally `1 + ""` would be reported as a type error but pyright doesn't complain here, because unreachable code doesn't get type checked at all! this is bad of course, because chances are if your code contains an `if` statement like this, you're expecting it to be run on multiple different python versions. ## `reportInvalidTypeVarUse` pyright incorrectly reports an error when a function contains a type var only in the return position, in cases where there's no valid alternative: !!! example ```py # error: TypeVar "T" appears only once in generic function signature # Use "object" instead def empty_list[T]() -> list[T]: return [] # using `object` as suggested will cause an error here: foo: list[int] = empty_list() ``` basedpyright will not report this error if the type var is used only in the return position, as long as it's possible for the function to safely return that type at runtime. however the error will still be reported if the function's return type is just the type var itself, for example: !!! example ```py # error: TypeVar "T" appears only once in generic function signature # Use "Never" instead def fn[T]() -> T: ... foo: int = fn() ``` here, pyright will incorrectly suggest returning `object` instead, of `Never`. there's no way for this function to safely return a `T`, but changing its return type to `object` will cause an error on the usage. basedpyright will correctly suggest changing the return type to `Never` instead. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/improved-ci-integration.md # improved integration with CI platforms regular pyright has third party integrations for github actions and gitlab, but they are difficult to install/set up. these integrations are built into basedpyright, which makes them much easier to use. ## github actions basedpyright automatically detects when it's running in a github action, and modifies its output to use [github workflow commands](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions). this means errors will be displayed on the affected lines of code in your pull requests automatically: ![image](https://github.com/DetachHead/basedpyright/assets/57028336/cc820085-73c2-41f8-ab0b-0333b97e2fea) this is an improvement to regular pyright, which requires you to use a [third party action](https://github.com/jakebailey/pyright-action) that [requires boilerplate to get working](https://github.com/jakebailey/pyright-action?tab=readme-ov-file#use-with-a-virtualenv). basedpyright just does it automatically without you having to do anything special: ```yaml title=".github/workflows/your_workflow.yaml" jobs: check: steps: - run: ... # checkout repo, install dependencies, etc - run: basedpyright # no additional arguments required. 
it automatically detects if it's running in a github action ``` ## gitlab code quality reports the `--gitlabcodequality` argument will output a [gitlab code quality report](https://docs.gitlab.com/ee/ci/testing/code_quality.html) which shows up on merge requests: ![image](https://github.com/DetachHead/basedpyright/assets/57028336/407f0e61-15f2-4d04-b235-1946d49fd180) to enable this in your gitlab CI, just specify a file path to output the report to, and in the `artifacts.reports.codequality` section of your `.gitlab-ci.yml` file: ```yaml title=".gitlab-ci.yml" basedpyright: script: basedpyright --gitlabcodequality report.json artifacts: reports: codequality: report.json ``` --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/improved-generic-narrowing.md # improved type narrowing when narrowing a type using an `isinstance` check, there's no way for the type checker to narrow its type variables, so pyright just narrows them to ["Unknown"](../usage/mypy-comparison.md#unknown-type-and-strict-mode): ```py def foo(value: object): if isinstance(value, list): reveal_type(value) # list[Unknown] ``` this makes sense in cases where the generic is invariant and there's no other way to represent any of its possibilities. for example if it were to be narrowed to `list[object]`, you wouldn't be able to assign `list[int]` to it. however in cases where the generic is covariant, contravariant, or uses constraints, it can be narrowed more accurately. basedpyright introduces the new [`strictGenericNarrowing`](../configuration/config-files.md#strictGenericNarrowing) setting to address this. the following sections explain how this new behavior effects different types of generics. ## narrowing of covariant generics when a type variable is covariant, its widest possible type is its bound, which defaults to `object`. when `strictGenericNarrowing` is enabled, if a generic is covariant and does not have a bound, it gets narrowed to `object` instead of "Unknown": ```py T_co = TypeVar("T_co", covariant=True) class Foo(Generic[T_co]): ... def foo(value: object): if isinstance(value, Foo): reveal_type(value) # Foo[object] ``` if the generic does have a bound, it gets narrowed to that bound instead: ```py T_co = TypeVar("T_co", bound=int | str, covariant=True) class Foo(Generic[T_co]): ... def foo(value: object): if isinstance(value, Foo): reveal_type(value) # Foo[int | str] ``` ## narrowing of contravariant generics when a type variable is contravariant its widest possible type is `Never`, so when `strictGenericNarrowing` is enabled, contravariant generics get narrowed to `Never` instead of "Unknown": ```py T_contra = TypeVar("T_contra", contravariant=True) class Foo(Generic[T_contra]): ... def foo(value: object): if isinstance(value, Foo): reveal_type(value) # Foo[Never] ``` ## narrowing of constraints when a type variable uses constraints, the rules of variance do not apply - see [this issue](https://github.com/DetachHead/basedpyright/issues/893) for more information. instead, a constraint declares that the generic must be resolved to be exactly one of the types specified. when `strictGenericNarrowing` is enabled, constrained generics are narrowed to a union of all possibilities: ```py class Foo[T: (int, str)]: ... 
def foo(value: object): if isinstance(value, Foo): reveal_type(value) # Foo[int] | Foo[str] ``` this also works when there's more than one constrained type variable - it creates a union of all possible combinations: ```py class Foo[T: (int, str), U: (float, bytes)]: ... def foo(value: object): if isinstance(value, Foo): reveal_type(value) # Foo[int, float] | Foo[int, bytes] | Foo[str, float] | Foo[str, bytes] ``` --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/improved-reportUninitializedInstanceVariable.md # improved logic for detecting uninitialized instance variables in pyright, the [`reportUninitializedInstanceVariable`](../configuration/config-files.md#reportUninitializedInstanceVariable) rule will report cases where an instance attribute is defined but not initialized: ```py class A: x: int # error: Instance variable "x" is not initialized in the class body or __init__ method def reset(self): # there's no guarantee this will be called so it doesn't count self.x = 3 ``` however, it's very common to write constructors that call a "reset" method. pyright doesn't account for this, so `reportUninitializedInstanceVariable` is still reported even though the attribute will always be initialized. basedpyright checks the class's `__init__` method for calls to other methods that may initialize instance attributes to eliminate such false positives: ```py class A: x: int # error in pyright, no error in basedpyright def __init__(self) -> None: self.reset() def reset(self): self.x = 3 ``` ## limitations for performance reasons, this only checks one call deep from the `__init__` method, so the following class will still report an error: ```py class A: x: int # reportUninitializedInstanceVariable error def __init__(self) -> None: self.initialize() def initialize(self): self.reset() def reset(self): self.x = 3 ``` although this compromise is not ideal, we've found that this change still eliminates a very common source of false positives for this rule. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/improved-translations.md # localization fixes ## improved translations the translations in pyright come from microsoft's localization team, who are not programmers. not only does this result in poor quality translations, but microsoft also doesn't accept contributions to fix them ([more info here](https://github.com/microsoft/pyright/issues/7441#issuecomment-1987027067)). we accept translation fixes in basedpyright. [see the localization guidelines](../development/localization.md) for information on how to contribute. ## fixed country code format for linux in pyright, you can configure the locale using [environment variables](../configuration/config-files.md#locale-configuration) in `"en-US"` format. this format is commonly used on windows, but linux uses the `"en_US"` format instead. unlike pyright, basedpyright supports both formats. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/language-server-improvements.md # language server improvements in addition to the [pylance exclusive features](./pylance-features.md), basedpyright also contains some additional improvements to the language server that aren't available in pyright or pylance. ## autocomplete improvements ### automatically inserting the `@override` decorator autocomplete suggestions for method overrides will automatically add the `@override` decorator: ![](./override-decorator-completions.gif) !!! 
info "for users targeting python <=3.11" since the `@typing.override` decorator was introduced in python 3.12, this functionality is only enabled if either: - you are targeting python 3.12 or above (see [`pythonVersion`](../configuration/config-files.md/#environment-options)) - you have enabled [`basedpyright.analysis.useTypingExtensions`](../configuration/language-server-settings.md#based-settings) !!! warning "important information for library developers" using `typing_extensions` creates a runtime dependency on the [`typing_extensions`](https://pypi.org/project/typing-extensions/) pypi package, so you must declare it as a project dependency. this is why `basedpyright.analysis.useTypingExtensions` is disabled by default to prevent users from unknowingly adding a new dependency to their project. such mistakes often go undetected until your package is released and causes a runtime error for your users because the module may be available in dev dependencies but not production dependencies. (we recommend using [DTach](https://detachhead.github.io/dtach/usage/commands/#tach-check-external) to detect issues like these) ### improved completions for `Literal`s in pyright/pylance, you get completions for `Literal`s that contain strings: ![](literal_str_completions.png) but in basedpyright, this also works with `Literal`s that contain other types, such as `int`, `bool` and `Enum` values: ![](other_literal_completions.png) ### improved completions for enums unlike pyright/pylance, basedpyright also supports completions for enum values: ![](enum_completions.png) this also works for other types of enums such as `IntEnum` and `StrEnum` ## improved diagnostic severity system in pyright, certain diagnostics such as unreachable and unused code are always reported as a hint and cannot be disabled even when the associated diagnostic rule is disabled (and in the case of unreachable code, [the diagnostic is not reported in most cases where the hint is reported](./fixes-for-rules.md#reportunreachable)). basedpyright introduces a new [`"hint"`](../configuration/config-files.md#diagnostic-categories) diagnostic category which can be applied to any diagnostic rule, and can be disabled just like all other diagnostic rules. some diagnostics use a diagnostic tag (unused or deprecated) if your IDE supports them: ```toml title="pyproject.toml" [tool.basedpyright] reportUnreachable = 'hint' reportUnusedParameter = 'hint' reportUnusedCallResult = 'hint' reportDeprecated = 'hint' ``` here's how they look in vscode: ![](diagnostic-tags.png) these diagnostic tags will still be present if the rule's diagnostic category is set to `"warning"`, `"error"` or `"information"`, but unlike pyright, they are disabled entirely if the rule's diagnostic category is set to `"none"`. ## deprecated completions pyright/pylance supports strikethrough diagnostic tags on usages of deprecated symbols: ![](deprecated-diagnostic-tag.png) but basedpyright also shows them in completions: ![](deprecated-completion.png) --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/new-diagnostic-rules.md # new diagnostic rules this section lists all of the new diagnostic rules that are exclusive to basedpyright and the motivation behind them. for a complete list of all diagnostic rules, [see here](../configuration/config-files.md#type-check-rule-overrides). ## `reportAny` pyright has a few options to ban "Unknown" types such as `reportUnknownVariableType`, `reportUnknownParameterType`, etc. 
but "Unknown" is not a real type, rather a distinction pyright uses used to represent `Any`s that come from untyped code or unfollowed imports. if you want to ban all kinds of `Any`, pyright has no way to do that: ```py def foo(bar, baz: Any) -> Any: print(bar) # error: unknown type print(baz) # no error ``` basedpyright introduces the `reportAny` option, which will report an error on usages of anything typed as `Any`: ```py def foo(baz: Any) -> Any: print(baz) # error: reportAny ``` ## `reportExplicitAny` similar to [`reportAny`](#reportany), however this rule bans usages of the `Any` type itself rather than expressions that are typed as `Any`: ```py def foo(baz: Any) -> Any: # error: reportExplicitAny print(baz) # error: reportAny ``` ## `reportIgnoreCommentWithoutRule` it's good practice to specify an error code in your `pyright: ignore` comments: ```py # pyright: ignore[reportUnreachable] ``` this way, if the error changes or a new error appears on the same line in the future, you'll get a new error because the comment doesn't account for the other error. !!! note `type: ignore` comments ([`enableTypeIgnoreComments`](../configuration/config-files.md#enableTypeIgnoreComments)) are unsafe and are disabled by default (see [#330](https://github.com/DetachHead/basedpyright/issues/330) and [#55](https://github.com/DetachHead/basedpyright/issues/55)). we recommend using `pyright: ignore` comments instead. ## `reportPrivateLocalImportUsage` pyright's `reportPrivateImportUsage` rule only checks for private imports of third party modules inside `py.typed` packages. but there's no reason your own code shouldn't be subject to the same restrictions. to explicitly re-export something, give it a redundant alias [as described in the "Stub Files" section of PEP484](https://peps.python.org/pep-0484/#stub-files) (although it only mentions stub files, other type checkers like mypy have also extended this behavior to source files as well): ```py # foo.py from .some_module import a # private import from .some_module import b as b # explicit re-export ``` ```py # bar.py # reportPrivateLocalImportUsage error, because `a` is not explicitly re-exported by the `foo` module: from foo import a # no error, because `b` is explicitly re-exported: from foo import b ``` ## `reportImplicitRelativeImport` pyright allows invalid imports such as this: ```py # ./module_name/foo.py: ``` ```py # ./module_name/bar.py: import foo # wrong! should be `import module_name.foo` or `from module_name import foo` ``` this may look correct at first glance, and will work when running `bar.py` directly as a script, but when it's imported as a module, it will crash: ```py # ./main.py: import module_name.bar # ModuleNotFoundError: No module named 'foo' ``` the new `reportImplicitRelativeImport` rule bans imports like this. if you want to do a relative import, the correct way to do it is by importing it from `.` (the current package): ```py # ./module_name/bar.py: from . import foo ``` ## `reportInvalidCast` most of the time when casting, you want to either cast to a narrower or wider type: ```py foo: int | None cast(int, foo) # narrower type cast(object, foo) # wider type ``` but pyright doesn't prevent casts to a type that doesn't overlap with the original: ```py foo: int cast(str, foo) ``` in this example, it's impossible for `foo` to be a `str` if it's also an `int`, because the `int` and `str` types do not overlap. the `reportInvalidCast` rule will report invalid casts like these. !!! 
note "note about casting with `TypedDict`s" a common use case of `cast` is to convert a regular `dict` into a `TypedDict`: ```py foo: dict[str, int | str] bar = cast(dict[{"foo": int, "bar": str}], foo) ``` unfortunately, this will cause a `reportInvalidCast` error when this rule is enabled, because although at runtime `TypedDict` is a `dict`, type checkers treat it as an unrelated subtype of `Mapping` that doesn't have a `clear` method, which would break its type-safety if it were to be called on a `TypedDict`. this means that although casting between them is a common use case, `TypedDict`s and `dict`s technically do not overlap. ## `reportUnsafeMultipleInheritance` multiple inheritance in python is awful: ```py class Foo: def __init__(self): super().__init__() class Bar: def __init__(self): ... class Baz(Foo, Bar): ... Baz() ``` in this example, `Baz()` calls `Foo.__init__`, and the `super().__init__()` in `Foo` now calls to `Bar.__init__` even though `Foo` does not extend `Bar`. this is complete nonsense and very unsafe, because there's no way to statically know what the super class will be. pyright has the `reportMissingSuperCall` rule which, for this reason, complains even when your class doesn't have a base class. but that sucks because there's no way to know what arguments the unknown `__init__` takes, which means even if you do add a call to `super().__init__()` you have no clue what arguments it may take. so this rule is super annoying when it's enabled, and has very little benefit because it barely makes a difference in terms of safety. `reportUnsafeMultipleInheritance` bans multiple inheritance when there are multiple base classes with an `__init__` or `__new__` method, as there's no way to guarantee that all of them will get called with the correct arguments (or at all). this allows `reportMissingSuperCall` to be more lenient. ie. when `reportUnsafeMultipleInheritance` is enabled, missing `super()` calls will only be reported on classes that actually have a base class. ## `reportUnusedParameter` pyright will report an unused diagnostic on unused function parameters: ```py def print_value(value: str): # "value" is not accessed print("something else") ``` but this just greys out the parameter instead of actually reporting it as an error. basedpyright introduces a new `reportUnusedParameter` diagnostic rule which supports all the severity options (`"error"`, `"warning"` and `"none"`) as well as `"hint"`, which is the default behavior in pyright. ## `reportImplicitAbstractClass` abstract classes in python are declared using a base class called `ABC`, and were designed to be validated at runtime rather than by a static type checker. this means that there's no decent way to ensure on a class's definition that it implements all of the required abstract methods: ```py from abc import ABC, abstractmethod class AbstractFoo(ABC): @abstractmethod def foo(self): ... # no error here even though you haven't implemented `foo` because pyright assumes you want this class to also be abstract class FooImpl(AbstractFoo): def bar(self): print("hi") foo = FooImpl() # error ``` this isn't ideal, because you may not necessarily be instantiating the class (eg. if you're developing a library and expect the user to import and instantiate it), meaning this error will go undetected. the `reportImplicitAbstractClass` rule bans classes like this that are implicitly abstract just because their base class is also abstract. 
it enforces that the class also explicitly extends `ABC` as well, to indicate that this is intentional: ```py # even though Foo also extends ABC and this is technically redundant, it's still required to tell basedpyright that you # are intentionally keeping this class abstract class FooImpl(AbstractFoo, ABC): def bar(self): print("hi") ``` ## `reportIncompatibleUnannotatedOverride` pyright's `reportIncompatibleVariableOverride` rule checks for class attribute overrides with an incompatible type: ```py class A: value: int = 1 class B(A): value: int | None = None # error, because `int | None` is not compatible with `int` ``` but it does not report an error if the attribute in the base class does not have a type annotation: ```py class A: value = 1 # inferred as `int` class B(A): value = None # no error, even though the type on the base class is `int` and the type here is `None` ``` this rule will report an error in such cases. !!! warning the reason pyright does not check for cases like this is allegedly because it would be "very slow" to do so. in our testing, we have not noticed any performance impact with this rule enabled, but just in case, it's disabled by default in [the "recommended" diagnostic ruleset](../configuration/config-files.md#recommended-and-all) for now. we intend to enable this rule by default in the future once we are more confident with it. please [open an issue](https://github.com/DetachHead/basedpyright/issues/new?template=issue.yaml) if you notice basedpyright running noticeably slower with this rule enabled. if you encounter any performance issues with this rule, you may want to disable it and use [`reportUnannotatedClassAttribute`](#reportunannotatedclassattribute) instead. ## `reportUnannotatedClassAttribute` since pyright does not warn when a class attribute without a type annotation is overridden with an incompatible type (see [`reportIncompatibleUnannotatedOverride`](#reportincompatibleunannotatedoverride)), you may want to enforce that all class attributes have a type annotation. this can be useful as an alternative to `reportIncompatibleUnannotatedOverride` if: - you are developing a library that you want to be fully type-safe for users who may be using pyright instead of basedpyright - you encountered performance issues with `reportIncompatibleUnannotatedOverride` - you prefer explicit type annotations to reduce the risk of introducing unexpected breaking changes to your API `reportUnannotatedClassAttribute` will report an error on all unannotated class attributes that can potentially be overridden (ie. not final or private), even if they don't override an attribute on a base class with an incompatible type. ## `reportInvalidAbstractMethod` pyright ignores methods decorated with `@abstractmethod` if the class is not abstract: ```py from abc import abstractmethod class Foo: @abstractmethod def foo(): ... _ = Foo() # no error ``` this is allegedly for [performance reasons](https://github.com/microsoft/pyright/issues/5026#issuecomment-1526479622), but basedpyright's `reportInvalidAbstractMethod` rule is reported on the method definition instead of the usage, so it doesn't have to check every method when instantiating every non-abstract class. it also just makes more sense to report the error on the method definition anyway. methods decorated with `@abstractmethod` on classes that do not extend `ABC` will not raise a runtime error if they are instantiated, making them less safe. 
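here's a small sketch of that runtime behavior (this is plain standard-library `abc` behavior, nothing basedpyright-specific):

```py
from abc import ABC, abstractmethod


class Foo:  # does not extend ABC, so @abstractmethod has no effect at runtime
    @abstractmethod
    def foo(self): ...


foo = Foo()  # no TypeError is raised
foo.foo()    # the "abstract" method silently runs and returns None


class Bar(ABC):  # extending ABC restores the runtime check
    @abstractmethod
    def bar(self): ...


Bar()  # TypeError at runtime: can't instantiate abstract class Bar
```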
## `reportSelfClsDefault` Pyright allows specifying a default value for `self` in instance methods and `cls` in class methods. ```py class Foo: def foo(self=1): ... ``` This is almost certainly a mistake, so `reportSelfClsDefault` warns about it. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/pylance-features.md # pylance features basedpyright re-implements some of the features that microsoft made exclusive to pylance, which is microsoft's closed-source vscode extension built on top of the pyright language server with some additional exclusive functionality ([see the pylance FAQ for more information](https://github.com/microsoft/pylance-release/blob/main/FAQ.md#what-features-are-in-pylance-but-not-in-pyright-what-is-the-difference-exactly)). the following features have been re-implemented in basedpyright's language server, meaning they are no longer exclusive to vscode. you can use any editor that supports the [language server protocol](https://microsoft.github.io/language-server-protocol/). for more information on installing pyright in your editor of choice, see [the installation instructions](../installation/ides.md). ## jupyter notebooks just like pylance, the basedpyright language server works with jupyter notebooks: ![](jupyter.png) however unlike pylance, basedpyright also supports type-checking them using the CLI: ``` >basedpyright c:\project\asdf.ipynb - cell 1 c:\project\asdf.ipynb:1:1:12 - error: Type "Literal['']" is not assignable to declared type "int" "Literal['']" is not assignable to "int" (reportAssignmentType) c:\project\asdf.ipynb - cell 2 c:\project\asdf.ipynb:2:1:12 - error: Type "int" is not assignable to declared type "str" "int" is not assignable to "str" (reportAssignmentType) 2 errors, 0 warnings, 0 notes ``` ## code actions ### import suggestions pyright only supports import suggestions as autocomplete suggestions, but not as quick fixes (see [this issue](https://github.com/microsoft/pyright/issues/4263#issuecomment-1333987645)). basedpyright re-implements pylance's import suggestion code actions: ![image](https://github.com/DetachHead/basedpyright/assets/57028336/a3e8a506-5682-4230-a43c-e815c84889c0) ### ignore comments basedpyright also re-implements pylance's code actions for adding `# pyright: ignore` comments: ![](./ignore-comment-code-action.png) !!! note `# type: ignore` comments are not supported by this code action because they are discouraged, [see here](../configuration/comments.md#prefer-pyrightignore-comments) for more information. ## semantic highlighting | before | after | | --------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | | ![image](https://github.com/DetachHead/basedpyright/assets/57028336/f2977463-b828-470e-8094-ca437a312350) | ![image](https://github.com/DetachHead/basedpyright/assets/57028336/e2c7999e-28c0-4a4c-b975-f63575ec3404) | basedpyright re-implements pylance's semantic highlighting along with some additional improvements: - supports [the new `type` keyword in python 3.12](https://peps.python.org/pep-0695/) - `Final` variables are coloured as read-only initial implementation of the semantic highlighting provider was adapted from the [pyright-inlay-hints](https://github.com/jbradaric/pyright-inlay-hints) project. 
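as a small illustration, both of the improvements listed above apply to code like this (the names are arbitrary, and the `type` statement requires python 3.12+):

```py
from typing import Final

type UserId = int       # the PEP 695 `type` keyword is recognised by the semantic tokens provider
MAX_RETRIES: Final = 3  # `Final` variables are coloured as read-only
```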
## inlay hints ![image](https://github.com/DetachHead/basedpyright/assets/57028336/41ed93e8-04e2-4163-a1be-c9ec8f3d90df) basedpyright contains several improvements and bug fixes to the original implementation adapted from [pyright-inlay-hints](https://github.com/jbradaric/pyright-inlay-hints). basedpyright also supports double-clicking to insert inlay hints. unlike pylance, this also works on `Callable` types: ![](./double-click-inlay-hint.gif) ## docstrings for compiled builtin modules many of the builtin modules are written in c, meaning the pyright language server cannot statically inspect and display their docstrings to the user. unfortunately they are also not available in the `.pyi` stubs for these modules, as [the typeshed maintainers consider it to be too much of a maintenance nightmare](https://github.com/python/typeshed/issues/4881#issuecomment-1275775973). pylance works around this problem by running a "docstring scraper" script on the user's machine, which imports compiled builtin modules, scrapes all the docstrings from them at runtime, then saves them so that the language server can read them. however this isn't ideal for a few reasons: - only docstrings for modules and functions available on the user's current OS and python version will be generated. so if you're working on a cross-platform project, or code that's intended to be run on multiple versions of python, you won't be able to see docstrings for compiled builtin modules that are not available in your current python installation. - the check to determine whether a builtin object is compiled is done at the module level, meaning modules like `re` and `os` which have python source files but contain re-exports of compiled functions, are treated as if they are entirely written in python. this means many of their docstrings are still missing in pylance. - it's (probably) slower because these docstrings need to be scraped either when the user launches vscode, or when the user hovers over a builtin class/function (disclaimer: i don't actually know when it runs, because pylance is closed source) basedpyright solves all of these problems by using [docify](https://github.com/AThePeanut4/docify) to scrape the docstrings from all compiled builtin functions/classes for all currently supported python versions and all platforms (macos, windows and linux), and including them in the default typeshed stubs that come with the basedpyright package. ### examples here's a demo of basedpyright's builtin docstrings when running on windows, compared to pylance: #### basedpyright ![](https://github.com/DetachHead/basedpyright/assets/57028336/df4f4916-4b5e-4367-bd88-4ddadf283780) #### pylance ![](https://github.com/DetachHead/basedpyright/assets/57028336/15a38478-8405-419c-a6e1-3c0801808896) ### generating your own stubs with docstrings basedpyright uses [docify](https://github.com/AThePeanut4/docify) to add docstrings to its stubs. 
if you have third party compiled modules and you want basedpyright to see its docstrings, you can do the same: ``` python -m docify path/to/stubs/for/package --in-place ``` or if you're using a different version of typeshed, you can use the `--if-needed` argument to replicate how basedpyright's version of typeshed is generated for your current platform and python version: ``` python -m docify path/to/typeshed/stdlib --if-needed --in-place ``` ## renaming packages and modules when renaming a package or module, basedpyright will update all usages to the new name, just like pylance does: ![](https://github.com/user-attachments/assets/6207fe90-027a-4227-a1ed-d2c4406ad38c) ## fixed parsing of multi-line parameter descriptions in docstrings in pyright, if a parameter description spans more than one line, only the first line is shown when hovering over the parameter: ![alt text](./broken-docstring-parameter-descriptions.png) this issue is fixed in pylance, but the fix was never ported to pyright, so we had to fix it ourselves: ![](./fixed-docstring-parameter-descriptions.png) ## automatic conversion to f-string when typing `{` inside a string basedpyright implements the `autoFormatStrings` setting from pylance ([`basedpyright.analysis.autoFormatStrings`](../configuration/language-server-settings.md#based-settings)): ![](./autoformatstrings.gif) ## hover and "go to definition" on operators pylance supports "go to definition" on some operators. basedpyright supports this and takes it a step further by also showing hover information as well: ![](./operators.png) ## go to implementations basedpyright supports "Go to Implementations" / "Find All Implementations" from pylance: ![](./implementations.png) ## Pylance features missing from basedpyright See the [open issues](https://github.com/DetachHead/basedpyright/issues?q=is:issue+is:open+pylance+label:%22pylance+parity%22) related to feature parity with Pylance. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/benefits-over-pyright/pypi-package-vscode-pinning.md # pypi package and version pinning ## pypi package - no nodejs required pyright is only published as an npm package, which requires you to install nodejs. [there is an unofficial version on pypi](https://pypi.org/project/pyright/), but by default it just installs node and the npm package the first time you invoke the cli, [which is quite flaky](https://github.com/RobertCraigie/pyright-python/issues/231). python developers should not be expected to install nodejs in order to typecheck their python code. although pyright itself is written in typescript and therefore depends on nodejs, it's an implementation detail that should be of no concern to the user. a command-line tool intended for python developers should not have to be installed and managed by a package manager for a completely different language. this is why basedpyright is [officially published on pypi](https://pypi.org/project/basedpyright/), which comes bundled with the npm package using [nodejs-wheel](https://github.com/njzjz/nodejs-wheel). see [the installation instructions](../installation/command-line-and-language-server.md#pypi-package) for more information. ## ability to pin the version used by vscode in pyright, if the vscode extension gets updated, you may see errors in your project that don't appear in the CI, or vice-versa. see [this issue](https://github.com/microsoft/pylance-release/issues/5207). 
basedpyright fixes this problem by adding an `importStrategy` option to the extension, which defaults to looking in your project for the [basedpyright pypi package](#pypi-package-no-nodejs-required). --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/configuration/command-line.md ## Command-Line Usage: basedpyright [options] [files...] [^1] basedpyright can be run as either a language server or as a command-line tool. The command-line version allows for the following options: | Flag | Description | | :-------------------------------------- | :--------------------------------------------------- | | --createstub `` | Create type stub file(s) for import | | --dependencies | Emit import dependency information | | -h, --help | Show help message | | --ignoreexternal | Ignore external imports for --verifytypes | | --level | Minimum diagnostic level (error or warning) | | --outputjson | Output results in JSON format | | --gitlabcodequality | Output results to a gitlab code quality report | | --writebaseline | Write new errors to the baseline file | | --baselinefile `` | Path to the baseline file to be used [^2] | | --baselinemode `` | Specify the [baseline mode](#option-2-baselinemode-experimental)| | -p, --project `` | Use the configuration file at this location | | --pythonpath `` | Path to the Python interpreter [^3] | | --pythonplatform `` | Analyze for platform (Darwin, Linux, Windows) | | --pythonversion `` | Analyze for version (3.3, 3.4, etc.) | | --skipunannotated | Skip type analysis of unannotated functions | | --stats | Print detailed performance stats | | -t, --typeshedpath `` | Use typeshed type stubs at this location [^4] | | --threads | Use up to N threads to parallelize type checking [^5] | | -v, --venvpath `` | Directory that contains virtual environments [^6] | | --verbose | Emit verbose diagnostics | | --verifytypes `` | Verify completeness of types in py.typed package | | --version | Print pyright version and exit | | --warnings | Use exit code of 1 if warnings are reported [^7] | | -w, --watch | Continue to run and watch for changes [^8] | | - | Read file or directory list from stdin | [^1]: If specific files are specified on the command line, it overrides the files or directories specified in the pyrightconfig.json or pyproject.toml file. [^2]: Defaults to `./.basedpyright/baseline.json` [^3]: This option is the same as the language server setting `python.pythonPath`. It cannot be used with `--venvpath`. The `--pythonpath` option is recommended over `--venvpath` in most cases. For more details, refer to the [import resolution](../usage/import-resolution.md#configuring-your-python-environment) documentation. [^4]: Pyright has built-in typeshed type stubs for Python stdlib functionality. To use a different version of typeshed type stubs, specify the directory with this option. [^5]: This feature is experimental. If thread count is > 1, multiple copies of pyright are executed in parallel to type check files in a project. If no thread count is specified, the thread count is based on the number of available logical processors (if at least 4) or 1 (if less than 4). [^6]: `--venvpath` is discouraged in basedpyright. [see here](../benefits-over-pyright/better-defaults.md#default-value-for-pythonpath) for more info. This option is the same as the language server setting `python.venvPath`. It used in conjunction with configuration file, which can refer to different virtual environments by name. 
For more details, refer to the [configuration](./config-files.md) and [import resolution](../usage/import-resolution.md#configuring-your-python-environment) documentation. This allows a common config file to be checked in to the project and shared by everyone on the development team without making assumptions about the local paths to the venv directory on each developer’s computer. [^7]: `--warnings` is equivalent to [`failOnWarnings`](./config-files.md#failOnWarnings), which is enabled by default in basedpyright, meaning this option is redundant unless you explicitly disable `failOnWarnings`. [see here](../benefits-over-pyright/better-defaults.md#typecheckingmode) for more information about this decision [^8]: When running in watch mode, pyright will reanalyze only those files that have been modified. These “deltas” are typically much faster than the initial analysis, which needs to analyze all files in the source tree. ## Pyright Exit Codes | Exit Code | Meaning | | :---------- | :--------------------------------------------------------------- | | 0 | No errors reported | | 1 | One or more errors reported | | 2 | Fatal error occurred with no errors or warnings reported | | 3 | Config file could not be read or parsed | | 4 | Illegal command-line parameters specified | ## JSON Output If the “--outputjson” option is specified on the command line, diagnostics are output in JSON format. The JSON structure is as follows: ```ts interface PyrightJsonResults { version: string, time: string, generalDiagnostics: Diagnostic[], summary: { filesAnalyzed: number, errorCount: number, warningCount: number, informationCount: number, timeInSec: number } } ``` Each Diagnostic is output in the following format: ```ts interface Diagnostic { file: string, cell?: string // if the file is a jupyter notebook severity: 'error' | 'warning' | 'information', message: string, rule?: string, range: { start: { line: number, character: number }, end: { line: number, character: number } } } ``` Diagnostic line and character numbers are zero-based. Not all diagnostics have an associated diagnostic rule. Diagnostic rules are used only for diagnostics that can be disabled or enabled. If a rule is associated with the diagnostic, it is included in the output. If it’s not, the rule field is omitted from the JSON output. ## Gitlab code quality report the `--gitlabcodequality` argument will output a [gitlab code quality report](https://docs.gitlab.com/ee/ci/testing/code_quality.html). to enable this in your gitlab CI, you need to specify a path to the code quality report file to this argument, and in the `artifacts` section in your gitlab CI file: ```yaml basedpyright: script: basedpyright --gitlabcodequality report.json artifacts: reports: codequality: report.json ``` ## Regenerating the baseline file if you're using [baseline](../benefits-over-pyright/baseline.md), as baselined errors are removed from your code, the CLI will automatically update the baseline file to remove them: ``` >basedpyright updated ./.basedpyright/baseline.json with 200 errors (went down by 5) 0 errors, 0 warnings, 0 notes ``` the `--writebaseline` argument is only required if you are intentionally writing new errors to the baseline file. for more information about when to use this argument, [see here](../benefits-over-pyright/baseline.md#how-often-do-i-need-to-update-the-baseline-file). 
the CLI provides two options for managing the baseline file: ### Option 1: `--writebaseline` (recommended) #### when not specified - if running locally, behaves the same as [`--baselinemode=auto`](#-baselinemodeauto) - if [running in CI](https://www.npmjs.com/package/is-ci), behaves the same as [`--baselinemode=lock`](#-baselinemodelock) #### when specified always updates the baseline file, even if new errors are added. ### Option 2: `--baselinemode` (experimental) !!! warning `--baselinemode` is an experimental feature and is subject to breaking changes in the future. if you have feedback for this feature, please [open an issue](https://github.com/DetachHead/basedpyright/issues/new/choose) !!! tip for most users, we don't recommend `--baselinemode` as the [`--writebaseline` flag](#option-1-writebaseline-recommended) is sufficient for most use cases. `--baselinemode` exists for users who want more control over how and when the baseline file is used. #### `--baselinemode=auto` only updates the baseline file if diagnostics have been removed and no new diagnostics have been added. !!! note this is the same as not specifying `--baselinemode` or `--writebaseline` when running locally. you only need to specify `--baselinemode=auto` explicitly if you want to [disable the default `--baselinemode=never` behavior in CI environments](#option-1-writebaseline-recommended) #### `--baselinemode=lock` never writes to the baseline file, even if no new diagnostics have surfaced, and instead exits with a non-zero exit code if the baseline file needs to be updated. useful in CI environments when you want to ensure that the baseline file is up-to-date. !!! note basedpyright [already defaults to this mode if running in CI](#option-1-writebaseline-recommended). you only need to specify this explicitly if your CI environment is not detected for some reason, or if you want this behavior locally (such as a [prek](https://github.com/j178/prek) hook). you can think of it like a [lockfile](https://docs.astral.sh/uv/concepts/projects/sync/#checking-the-lockfile). you don't want the baseline file to contain errors that don't exist anymore because: - when other contributors run basedpyright, it could update the baseline file to remove errors in code they have not changed, which is unexpected and leads to confusion - it would make it possible to unintentionally re-introduce the error in the future without it being flagged by basedpyright #### `--baselinemode=discard` reads the baseline file but never updates it, even if it needs to be updated. exits with code 0 unless new diagnostics have surfaced. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/configuration/comments.md # Comments Some behaviors of Pyright can be controlled through the use of comments within the source file. ## File-level Type Controls Strict type checking, where most supported type-checking switches generate errors, can be enabled for a file through the use of a special comment. Typically, this comment is placed at or near the top of a code file on its own line. ```python # pyright: strict ``` Likewise, basic type checking can be enabled for a file. If you use `# pyright: basic`, the settings for the file use the default “basic” settings, not any override settings specified in the configuration file or language server settings. You can override the basic default settings within the file by specifying them individually (see below). 
```python # pyright: basic ``` Individual configuration settings can also be overridden on a per-file basis and optionally combined with “strict” or “basic” type checking. For example, if you want to enable all type checks except for “reportPrivateUsage”, you could add the following comment: ```python # pyright: strict, reportPrivateUsage=false ``` Diagnostic levels are also supported. ```python # pyright: reportPrivateUsage=warning, reportOptionalCall=error ``` ## Line-level Diagnostic Suppression PEP 484 defines a special comment `# type: ignore` that can be used at the end of a line to suppress all diagnostics emitted by a type checker on that line. Pyright supports this mechanism. This is disabled by default in basedpyright. [See below](#prefer-pyrightignore-comments) for more information. Pyright also supports a `# pyright: ignore` comment at the end of a line to suppress all Pyright diagnostics on that line. This can be useful if you use multiple type checkers on your source base and want to limit suppression of diagnostics to Pyright only. The `# pyright: ignore` comment accepts an optional list of comma-delimited diagnostic rule names surrounded by square brackets. If such a list is present, only diagnostics within those diagnostic rule categories are suppressed on that line. For example, `# pyright: ignore [reportPrivateUsage, reportGeneralTypeIssues]` would suppress diagnostics related to those two categories but no others. If the `reportUnnecessaryTypeIgnoreComment` configuration option is enabled, any unnecessary `# type: ignore` and `# pyright: ignore` comments will be reported so they can be removed. ### Prefer `# pyright:ignore` comments `# pyright:ignore` comments are preferred over `# type:ignore` comments because they are more strict than `# type:ignore` comments: - `# type:ignore` comments will always suppress all errors on the line, regardless of what diagnostic rules are specified in brackets. - `# type:ignore` comments are not checked to ensure that the specified rule is valid: ```py # No error here, even though you are suppressing an invalid diagnostic code. 1 + "" # type:ignore[asdf] ``` This decision was probably made to support other type checkers like mypy which use different codes to Pyright, but in that case, you should just disable `enableTypeIgnoreComments` to prevent Pyright from looking at them. In basedpyright, `enableTypeIgnoreComments` is disabled by default to avoid these issues. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/configuration/config-files.md ## Pyright Configuration basedpyright offers flexible configuration options specified in a JSON-formatted text configuration. By default, the file is called “pyrightconfig.json” and is located within the root directory of your project. Multi-root workspaces (“Add Folder to Workspace…”) are supported, and each workspace root can have its own “pyrightconfig.json” file. For a sample pyrightconfig.json file, see [below](../configuration/config-files.md#sample-config-file). basedpyright settings can also be specified in a `[tool.basedpyright]` section of a “pyproject.toml” file. A “pyrightconfig.json” file always takes precedent over “pyproject.toml” if both are present. For a sample pyproject.toml file, see [below](../configuration/config-files.md#sample-pyprojecttoml-file). !!! info the `[tool.pyright]` section in `pyproject.toml` is also supported for backwards compatibility with existing pyright configs. 
Relative paths specified within the config file are relative to the config file’s location. Paths with shell variables (including `~`) are not supported. Paths within a config file should generally be relative paths so the config file can be shared by other developers who contribute to the project. ## Environment Options The following settings control the *environment* in which basedpyright will check for diagnostics. These settings determine how Pyright finds source files, imports, and what Python version specific rules are applied. - **include** [array of paths, optional]: Paths of directories or files that should be considered part of the project. If no paths are specified, pyright defaults to the directory that contains the config file. Paths may contain wildcard characters ** (a directory or multiple levels of directories), * (a sequence of zero or more characters), or ? (a single character). If no include paths are specified, the root path for the workspace is assumed. - **exclude** [array of paths, optional]: Paths of directories or files that should not be considered part of the project. These override the directories and files that `include` matched, allowing specific subdirectories to be excluded. Note that files in the exclude paths may still be included in the analysis if they are referenced (imported) by source files that are not excluded. Paths may contain wildcard characters ** (a directory or multiple levels of directories), * (a sequence of zero or more characters), or ? (a single character). If no exclude paths are specified, Pyright automatically excludes the following: `**/node_modules`, `**/__pycache__`, `**/.*`. Pyright also excludes any virtual environment directories regardless of the exclude paths specified. For more detail on Python environment specification and discovery, refer to the [import resolution](../usage/import-resolution.md#configuring-your-python-environment) documentation. - **strict** [array of paths, optional]: Paths of directories or files that should use “strict” analysis if they are included. This is the same as manually adding a “# pyright: strict” comment. In strict mode, most type-checking rules are enabled. Refer to [this table](#diagnostic-settings-defaults) for details about which rules are enabled in strict mode. Paths may contain wildcard characters ** (a directory or multiple levels of directories), * (a sequence of zero or more characters), or ? (a single character). - **extends** [path, optional]: Path to another `.json` or `.toml` file that is used as a “base configuration”, allowing this configuration to inherit configuration settings. Top-level keys within this configuration overwrite top-level keys in the base configuration. Multiple levels of inheritance are supported. Relative paths specified in a configuration file are resolved relative to the location of that configuration file. - **defineConstant** [map of constants to values (boolean or string), optional]: Set of identifiers that should be assumed to contain a constant value wherever used within this program. For example, `{ "DEBUG": true }` indicates that pyright should assume that the identifier `DEBUG` will always be equal to `True`. If this identifier is used within a conditional expression (such as `if not DEBUG:`) pyright will use the indicated value to determine whether the guarded block is reachable or not. Member expressions that reference one of these constants (e.g. `my_module.DEBUG`) are also supported. 
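For illustration, here is a rough sketch of how `defineConstant` interacts with conditional code; the `DEBUG` name and the config fragment are only examples, not required values:

```python
# Assuming a config containing: "defineConstant": { "DEBUG": true }
# basedpyright then treats the identifier DEBUG as the constant True in this file.

DEBUG = True  # at runtime this might instead come from an environment variable


def log(message: str) -> None:
    if not DEBUG:
        # With "DEBUG" defined as true, this branch is considered unreachable,
        # so the guarded code is analyzed accordingly.
        return
    print(message)
```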
- **typeshedPath** [path, optional]: Path to a directory that contains typeshed type stub files. Pyright ships with a bundled copy of typeshed type stubs. If you want to use a different version of typeshed stubs, you can clone the [typeshed github repo](https://github.com/python/typeshed) to a local directory and reference the location with this path. This option is useful if you’re actively contributing updates to typeshed. - **stubPath** [path, optional]: Path to a directory that contains custom type stubs. Each package's type stub file(s) are expected to be in its own subdirectory. The default value of this setting is "./typings". (typingsPath is now deprecated) - **verboseOutput** [boolean]: Specifies whether output logs should be verbose. This is useful when diagnosing certain problems like import resolution issues. - **extraPaths** [array of strings, optional]: Additional search paths that will be used when searching for modules imported by files. - **pythonVersion** [string, optional]: Specifies the version of Python that will be used to execute the source code. The version should be specified as a string in the format "M.m" where M is the major version and m is the minor (e.g. `"3.0"` or `"3.6"`). If a version is provided, pyright will generate errors if the source code makes use of language features that are not supported in that version. It will also tailor its use of type stub files, which conditionalizes type definitions based on the version. If no version is specified, pyright will use the version of the current python interpreter, if one is present. - **pythonPlatform** [string, optional]: Specifies the target platform that will be used to execute the source code. Should be one of `"Windows"`, `"Darwin"`, `"Linux"`, or `"All"`. If specified, pyright will tailor its use of type stub files, which conditionalize type definitions based on the platform. If no platform is specified, pyright will use the current platform. - **executionEnvironments** [array of objects, optional]: Specifies a list of execution environments (see [below](config-files.md#execution-environment-options)). Execution environments are searched from start to finish by comparing the path of a source file with the root path specified in the execution environment. - **useLibraryCodeForTypes** [boolean]: Determines whether pyright reads, parses and analyzes library code to extract type information in the absence of type stub files. Type information will typically be incomplete. We recommend using type stubs where possible. The default value for this option is true. ### basedpyright exclusive settings - **failOnWarnings** [boolean]: Whether to exit with a non-zero exit code in the CLI if any `"warning"` diagnostics are reported. Has no effect on the language server. This is equivalent to the `--warnings` CLI argument. - **allowedUntypedLibraries** [array of strings, optional]: Suppress issues related to unknown types when functions and classes are imported from certain modules. This affects the rules [`reportUnknownVariableType`](#reportUnknownVariableType), [`reportUnknownMemberType`](#reportUnknownMemberType), and [`reportMissingTypeStubs`](#reportMissingTypeStubs). The option name should be a list of module names, for example, `["library", "module.submodule"]`. By default, no modules are configured. - **baselineFile** [path, optional]: Path to a baseline file that contains a list of diagnostics that should be ignored. defaults to `./.basedpyright/baseline.json`. 
[more info](../benefits-over-pyright/baseline.md) ### Discouraged settings these settings are discouraged in basedpyright. [see here for more info](../benefits-over-pyright/better-defaults.md#default-value-for-pythonpath). - **venvPath** [path, optional]: Path to a directory containing one or more subdirectories, each of which contains a virtual environment. When used in conjunction with a **venv** setting (see below), pyright will search for imports in the virtual environment’s site-packages directory rather than the paths specified by the default Python interpreter. - **venv** [string, optional]: Used in conjunction with the venvPath, specifies the virtual environment to use. For more details, refer to the [import resolution](../usage/import-resolution.md#configuring-your-python-environment) documentation. ## Type Evaluation Settings The following settings determine how different types should be evaluated. - **strictListInference** [boolean]: When inferring the type of a list, use strict type assumptions. For example, the expression `[1, 'a', 3.4]` could be inferred to be of type `list[Any]` or `list[int | str | float]`. If this setting is true, it will use the latter (stricter) type. - **strictDictionaryInference** [boolean]: When inferring the type of a dictionary’s keys and values, use strict type assumptions. For example, the expression `{'a': 1, 'b': 'a'}` could be inferred to be of type `dict[str, Any]` or `dict[str, int | str]`. If this setting is true, it will use the latter (stricter) type. - **strictSetInference** [boolean]: When inferring the type of a set, use strict type assumptions. For example, the expression `{1, 'a', 3.4}` could be inferred to be of type `set[Any]` or `set[int | str | float]`. If this setting is true, it will use the latter (stricter) type. - **analyzeUnannotatedFunctions** [boolean]: Analyze and report errors for functions and methods that have no type annotations for input parameters or return types. - **strictParameterNoneValue** [boolean]: PEP 484 indicates that when a function parameter is assigned a default value of None, its type should implicitly be Optional even if the explicit type is not. When enabled, this rule requires that parameter type annotations use Optional explicitly in this case. - **deprecateTypingAliases** [boolean]: PEP 585 indicates that aliases to types in standard collections that were introduced solely to support generics are deprecated as of Python 3.9. This switch controls whether these are treated as deprecated. This applies only when pythonVersion is 3.9 or newer. - **enableExperimentalFeatures** [boolean]: Enables a set of experimental (mostly undocumented) features that correspond to proposed or exploratory changes to the Python typing standard. These features will likely change or be removed, so they should not be used except for experimentation purposes. - **disableBytesTypePromotions** [boolean]: Disables legacy behavior where `bytearray` and `memoryview` are considered subtypes of `bytes`. [PEP 688](https://peps.python.org/pep-0688/#no-special-meaning-for-bytes) deprecates this behavior, but this switch is provided to restore the older behavior. ### basedpyright exclusive settings - **strictGenericNarrowing** [boolean]: When a type is narrowed in such a way that its type parameters are not known (eg. using an `isinstance` check), basedpyright will resolve the type parameter to the generic's bound or constraint instead of `Any`. 
[more info](../benefits-over-pyright/improved-generic-narrowing.md) - **enableBasedFeatures** [boolean, optional]: Enable Basedpyright-specific features that are not officially supported in the python type system and can't be toggled via a diagnostic rule. You should keep this disabled if you're developing a library targeting users who may not be using basedpyright. This currently includes: - [Extra `dataclass_transform` features](../benefits-over-pyright/dataclass-transform.md) ### Discouraged settings there are options in pyright that are discouraged in basedpyright because we provide a better alternative. these options are still available for backwards compatibility, but you shouldn't use them. - **enableTypeIgnoreComments** [boolean]: PEP 484 defines support for `# type: ignore` comments. This switch enables or disables support for these comments. This option is discouraged in favor of `# pyright: ignore` comments in basedpyright, as they are safer. [See here](../benefits-over-pyright/new-diagnostic-rules.md#reportignorecommentwithoutrule) for more information. - **enableReachabilityAnalysis** [boolean]: If enabled, code that is determined to be unreachable by type analysis is reported using a tagged hint. This setting does not affect code that is determined to be unreachable independent of type analysis; such code is always reported as unreachable using a tagged hint. This setting also has no effect when using the command-line version of pyright because it never emits tagged hints for unreachable code. this rule is discouraged in basedpyright in favor of [`reportUnreachable`](../benefits-over-pyright/fixes-for-rules.md#reportunreachable). ## Diagnostic Categories diagnostics can be configured to be reported as any of the following categories: - `"error"` - causes the CLI to fail with exit code 1 - `"warning"` - only causes the CLI to fail if [`failOnWarnings`](#failOnWarnings) is enabled or the [`--warnings`](./command-line.md#command-line) argument is used - `"information"` - never causes the CLI to fail - `"hint"` - only appears as a hint in the language server, not reported in the CLI at all. [baselined diagnostics](../benefits-over-pyright/baseline.md) are reported as hints - `"none"` - disables the diagnostic entirely !!! info "deprecated diagnostic categories" as of basedpyright 1.21.0, the `"unreachable"`, `"unused"` and `"deprecated"` diagnostic categories are deprecated in favor of `"hint"`. rules where it makes sense to report them as "unnecessary" or "deprecated" [as mentioned in the LSP spec](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#diagnosticSeverity) are still reported as such, the configuration to do so has just been simplified. the `"hint"` diagnostic category is more flexible as it can be used on rules that don't refer to something that's unused, unreachable or deprecated. [baselined diagnostics](../benefits-over-pyright/baseline.md) are now all reported as hints, even ones that don't support diagnostic tags. for backwards compatibility, setting a diagnostic rule to any of these three deprecated categories will act as an alias for the `"hint"` category, however they may be removed entirely in a future release. ## Type Check Diagnostics Settings The following settings control pyright’s diagnostic output (warnings or errors). - **typeCheckingMode** ["off", "basic", "standard", "strict", "recommended", "all"]: Specifies the default rule set to use. 
Some rules can be overridden using additional configuration flags documented below. The default value for this setting is "recommended". If set to "off", all type-checking rules are disabled, but Python syntax and semantic errors are still reported. - **ignore** [array of paths, optional]: Paths of directories or files whose diagnostic output (errors and warnings) should be suppressed even if they are an included file or within the transitive closure of an included file. Paths may contain wildcard characters ** (a directory or multiple levels of directories), * (a sequence of zero or more characters), or ? (a single character). This setting can be overridden using the [language server settings](./language-server-settings.md). ### Type Check Rule Overrides The following settings allow more fine grained control over the **typeCheckingMode**. Unless otherwise specified, each diagnostic setting can specify a boolean value (`false` indicating that no error is generated and `true` indicating that an error is generated). Alternatively, a string value of `"none"`, `"hint"`, `"warning"`, `"information"`, or `"error"` can be used to specify the diagnostic level. [see above for more information](#diagnostic-categories) - **reportGeneralTypeIssues** [boolean or string, optional]: Generate or suppress diagnostics for general type inconsistencies, unsupported operations, argument/parameter mismatches, etc. This covers all of the basic type-checking rules not covered by other rules. It does not include syntax errors. - **reportPropertyTypeMismatch** [boolean or string, optional]: Generate or suppress diagnostics for properties where the type of the value passed to the setter is not assignable to the value returned by the getter. Such mismatches violate the intended use of properties, which are meant to act like variables. - **reportFunctionMemberAccess** [boolean or string, optional]: Generate or suppress diagnostics for non-standard member accesses for functions. - **reportMissingImports** [boolean or string, optional]: Generate or suppress diagnostics for imports that have no corresponding imported python file or type stub file. - **reportMissingModuleSource** [boolean or string, optional]: Generate or suppress diagnostics for imports that have no corresponding source file. This happens when a type stub is found, but the module source file was not found, indicating that the code may fail at runtime when using this execution environment. Type checking will be done using the type stub. - **reportInvalidTypeForm** [boolean or string, optional]: Generate or suppress diagnostics for type annotations that use invalid type expression forms or are semantically invalid. - **reportMissingTypeStubs** [boolean or string, optional]: Generate or suppress diagnostics for imports that have no corresponding type stub file (either a typeshed file or a custom type stub). The type checker requires type stubs to do its best job at analysis. - **reportImportCycles** [boolean or string, optional]: Generate or suppress diagnostics for cyclical import chains. These are not errors in Python, but they do slow down type analysis and often hint at architectural layering issues. Generally, they should be avoided. - **reportUnusedImport** [boolean or string, optional]: Generate or suppress diagnostics for an imported symbol that is not referenced within that file. - **reportUnusedClass** [boolean or string, optional]: Generate or suppress diagnostics for a class with a private name (starting with an underscore) that is not accessed. 
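For example, rules such as `reportUnusedImport` and `reportUnusedClass` would typically flag a hypothetical module like the following (the names are illustrative only):

```python
import json  # reportUnusedImport: `json` is never referenced in this module


class _Cache:  # reportUnusedClass: a private class that is never accessed
    entries: dict[str, str] = {}


def normalize(value: str) -> str:
    # Only this public function is actually used by the rest of the module.
    return value.strip().lower()
```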
- **reportUnusedFunction** [boolean or string, optional]: Generate or suppress diagnostics for a function or method with a private name (starting with an underscore) that is not accessed.
- **reportUnusedVariable** [boolean or string, optional]: Generate or suppress diagnostics for a variable that is not accessed.
- **reportDuplicateImport** [boolean or string, optional]: Generate or suppress diagnostics for an imported symbol or module that is imported more than once.
- **reportWildcardImportFromLibrary** [boolean or string, optional]: Generate or suppress diagnostics for a wildcard import from an external library. The use of this language feature is highly discouraged and can result in bugs when the library is updated.
- **reportAbstractUsage** [boolean or string, optional]: Generate or suppress diagnostics for the attempted instantiation of an abstract or protocol class, or the use of an abstract method.
- **reportArgumentType** [boolean or string, optional]: Generate or suppress diagnostics for argument type incompatibilities when evaluating a call expression.
- **reportAssertTypeFailure** [boolean or string, optional]: Generate or suppress diagnostics for a type mismatch detected by the `typing.assert_type` call.
- **reportAssignmentType** [boolean or string, optional]: Generate or suppress diagnostics for assignment type incompatibility.
- **reportAttributeAccessIssue** [boolean or string, optional]: Generate or suppress diagnostics related to attribute accesses.
- **reportCallIssue** [boolean or string, optional]: Generate or suppress diagnostics related to call expressions and arguments passed to a call target.
- **reportInconsistentOverload** [boolean or string, optional]: Generate or suppress diagnostics for an overloaded function that has overload signatures that are inconsistent with each other or with the implementation.
- **reportIndexIssue** [boolean or string, optional]: Generate or suppress diagnostics related to index operations and expressions.
- **reportInvalidTypeArguments** [boolean or string, optional]: Generate or suppress diagnostics for invalid type argument usage.
- **reportNoOverloadImplementation** [boolean or string, optional]: Generate or suppress diagnostics for an overloaded function or method if the implementation is not provided.
- **reportOperatorIssue** [boolean or string, optional]: Generate or suppress diagnostics related to the use of unary or binary operators (like `*` or `not`).
- **reportOptionalSubscript** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to subscript (index) a variable with an Optional type.
- **reportOptionalMemberAccess** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to access a member of a variable with an Optional type.
- **reportOptionalCall** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to call a variable with an Optional type.
- **reportOptionalIterable** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to use an Optional type as an iterable value (e.g. within a `for` statement).
- **reportOptionalContextManager** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to use an Optional type as a context manager (as a parameter to a `with` statement).
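The `reportOptional*` family of rules flags unguarded use of a value whose type includes `None`. A minimal sketch (the function and variable names are made up):

```python
def shout(name: str | None) -> str:
    # Without narrowing, `name.upper()` would be reported by
    # reportOptionalMemberAccess because `name` may be None.
    if name is None:
        return "HELLO"
    # After the `is None` check, `name` is narrowed to `str`, so this is fine.
    return name.upper()
```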
- **reportOptionalOperand** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to use an Optional type as an operand to a unary operator (like `~`) or the left-hand operator of a binary operator (like `*` or `<<`). - **reportRedeclaration** [boolean or string, optional]: Generate or suppress diagnostics for a symbol that has more than one type declaration. - **reportReturnType** [boolean or string, optional]: Generate or suppress diagnostics related to function return type compatibility. - **reportTypedDictNotRequiredAccess** [boolean or string, optional]: Generate or suppress diagnostics for an attempt to access a non-required field within a TypedDict without first checking whether it is present. - **reportUntypedFunctionDecorator** [boolean or string, optional]: Generate or suppress diagnostics for function decorators that have no type annotations. These obscure the function type, defeating many type analysis features. - **reportUntypedClassDecorator** [boolean or string, optional]: Generate or suppress diagnostics for class decorators that have no type annotations. These obscure the class type, defeating many type analysis features. - **reportUntypedBaseClass** [boolean or string, optional]: Generate or suppress diagnostics for base classes whose type cannot be determined statically. These obscure the class type, defeating many type analysis features. - **reportUntypedNamedTuple** [boolean or string, optional]: Generate or suppress diagnostics when “namedtuple” is used rather than “NamedTuple”. The former contains no type information, whereas the latter does. - **reportPrivateUsage** [boolean or string, optional]: Generate or suppress diagnostics for incorrect usage of private or protected variables or functions. Protected class members begin with a single underscore (“_”) and can be accessed only by subclasses. Private class members begin with a double underscore but do not end in a double underscore and can be accessed only within the declaring class. Variables and functions declared outside of a class are considered private if their names start with either a single or double underscore, and they cannot be accessed outside of the declaring module. - **reportTypeCommentUsage** [boolean or string, optional]: Prior to Python 3.5, the grammar did not support type annotations, so types needed to be specified using “type comments”. Python 3.5 eliminated the need for function type comments, and Python 3.6 eliminated the need for variable type comments. Future versions of Python will likely deprecate all support for type comments. If enabled, this check will flag any type comment usage unless it is required for compatibility with the specified language version. - **reportPrivateImportUsage** [boolean or string, optional]: Generate or suppress diagnostics for use of a symbol from a third party "py.typed" module that is not meant to be exported from that module. - **reportConstantRedefinition** [boolean or string, optional]: Generate or suppress diagnostics for attempts to redefine variables whose names are all-caps with underscores and numerals. - **reportDeprecated** [boolean or string, optional]: Generate or suppress diagnostics for use of a class or function that has been marked as deprecated. 
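For instance, `reportDeprecated` reacts to symbols marked with the `@deprecated` decorator from PEP 702 (shown here via `typing_extensions`; the function names are purely illustrative):

```python
from typing_extensions import deprecated


@deprecated("use parse_config_v2() instead")
def parse_config(path: str) -> dict[str, str]:
    return {}


# Calling a deprecated function is reported under reportDeprecated.
settings = parse_config("app.cfg")
```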
- **reportIncompatibleMethodOverride** [boolean or string, optional]: Generate or suppress diagnostics for methods that override a method of the same name in a base class in an incompatible manner (wrong number of parameters, incompatible parameter types, or incompatible return type). - **reportIncompatibleVariableOverride** [boolean or string, optional]: Generate or suppress diagnostics for class variable declarations that override a symbol of the same name in a base class with a type that is incompatible with the base class symbol type. - **reportInconsistentConstructor** [boolean or string, optional]: Generate or suppress diagnostics when an `__init__` method signature is inconsistent with a `__new__` signature. - **reportOverlappingOverload** [boolean or string, optional]: Generate or suppress diagnostics for function overloads that overlap in signature and obscure each other or have incompatible return types. - **reportPossiblyUnboundVariable** [boolean or string, optional]: Generate or suppress diagnostics for variables that are possibly unbound on some code paths. - **reportMissingSuperCall** [boolean or string, optional]: Generate or suppress diagnostics for `__init__`, `__init_subclass__`, `__enter__` and `__exit__` methods in a subclass that fail to call through to the same-named method on a base class. - **reportUninitializedInstanceVariable** [boolean or string, optional]: Generate or suppress diagnostics for instance variables within a class that are not initialized or declared within the class body or the `__init__` method. - **reportInvalidStringEscapeSequence** [boolean or string, optional]: Generate or suppress diagnostics for invalid escape sequences used within string literals. The Python specification indicates that such sequences will generate a syntax error in future versions. - **reportUnknownParameterType** [boolean or string, optional]: Generate or suppress diagnostics for input or return parameters for functions or methods that have an unknown type. - **reportUnknownArgumentType** [boolean or string, optional]: Generate or suppress diagnostics for call arguments for functions or methods that have an unknown type. - **reportUnknownLambdaType** [boolean or string, optional]: Generate or suppress diagnostics for input or return parameters for lambdas that have an unknown type. - **reportUnknownVariableType** [boolean or string, optional]: Generate or suppress diagnostics for variables that have an unknown type. - **reportUnknownMemberType** [boolean or string, optional]: Generate or suppress diagnostics for class or instance variables that have an unknown type. - **reportMissingParameterType** [boolean or string, optional]: Generate or suppress diagnostics for input parameters for functions or methods that are missing a type annotation. The `self` and `cls` parameters used within methods are exempt from this check. - **reportMissingTypeArgument** [boolean or string, optional]: Generate or suppress diagnostics when a generic class is used without providing explicit or implicit type arguments. - **reportInvalidTypeVarUse** [boolean or string, optional]: Generate or suppress diagnostics when a TypeVar is used inappropriately (e.g. if a TypeVar appears only once) within a generic function signature. - **reportCallInDefaultInitializer** [boolean or string, optional]: Generate or suppress diagnostics for function calls, list expressions, set expressions, or dictionary expressions within a default value initialization expression. 
Such calls can mask expensive operations that are performed at module initialization time. - **reportUnnecessaryIsInstance** [boolean or string, optional]: Generate or suppress diagnostics for `isinstance` or `issubclass` calls where the result is statically determined to be always true or always false. Such calls are often indicative of a programming error. - **reportUnnecessaryCast** [boolean or string, optional]: Generate or suppress diagnostics for `cast` calls that are statically determined to be unnecessary. Such calls are sometimes indicative of a programming error. - **reportUnnecessaryComparison** [boolean or string, optional]: Generate or suppress diagnostics for `==` or `!=` comparisons or other conditional expressions that are statically determined to always evaluate to False or True. Such comparisons are sometimes indicative of a programming error. Also reports `case` clauses in a `match` statement that can be statically determined to never match (with exception of the `_` wildcard pattern if it's used to explicitly assert that the case is unreachable). - **reportUnnecessaryContains** [boolean or string, optional]: Generate or suppress diagnostics for `in` operations that are statically determined to always evaluate to False or True. Such operations are sometimes indicative of a programming error. - **reportAssertAlwaysTrue** [boolean or string, optional]: Generate or suppress diagnostics for `assert` statement that will always succeed because its first argument is a parenthesized tuple (for example, `assert (v > 0, "Bad value")` when the intent was `assert v > 0, "Bad value"`). This is a common programming error. - **reportSelfClsParameterName** [boolean or string, optional]: Generate or suppress diagnostics for a missing or misnamed “self” parameter in instance methods and “cls” parameter in class methods. Instance methods in metaclasses (classes that derive from “type”) are allowed to use “cls” for instance methods. - **reportImplicitStringConcatenation** [boolean or string, optional]: Generate or suppress diagnostics for two or more string literals that follow each other, indicating an implicit concatenation. This is considered a bad practice and often masks bugs such as missing commas. - **reportUndefinedVariable** [boolean or string, optional]: Generate or suppress diagnostics for undefined variables. - **reportUnboundVariable** [boolean or string, optional]: Generate or suppress diagnostics for unbound variables. - **reportUnhashable** [boolean or string, optional]: Generate or suppress diagnostics for the use of an unhashable object in a container that requires hashability. The default value for this setting is `"error"`. - **reportInvalidStubStatement** [boolean or string, optional]: Generate or suppress diagnostics for statements that are syntactically correct but have no purpose within a type stub file. - **reportIncompleteStub** [boolean or string, optional]: Generate or suppress diagnostics for a module-level `__getattr__` call in a type stub file, indicating that it is incomplete. - **reportUnsupportedDunderAll** [boolean or string, optional]: Generate or suppress diagnostics for statements that define or manipulate `__all__` in a way that is not allowed by a static type checker, thus rendering the contents of `__all__` to be unknown or incorrect. Also reports names within the `__all__` list that are not present in the module namespace. 
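As a quick illustration of that last rule, `__all__` entries that don't exist in the module are flagged (the module contents here are made up):

```python
def connect() -> None: ...


def close() -> None: ...


# "disconnect" is not defined in this module, so it is reported by
# reportUnsupportedDunderAll. Building __all__ dynamically (e.g. in a loop)
# would also be flagged, since it can't be verified statically.
__all__ = ["connect", "close", "disconnect"]
```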
- **reportUnusedCallResult** [boolean or string, optional]: Generate or suppress diagnostics for call statements whose return value is not used in any way and is not None. - **reportUnusedCoroutine** [boolean or string, optional]: Generate or suppress diagnostics for call statements whose return value is not used in any way and is a Coroutine. This identifies a common error where an `await` keyword is mistakenly omitted. - **reportUnusedExcept** [boolean or string, optional]: Generate or suppress diagnostics for an `except` clause that will never be reached. - **reportUnusedExpression** [boolean or string, optional]: Generate or suppress diagnostics for simple expressions whose results are not used in any way. - **reportUnnecessaryTypeIgnoreComment** [boolean or string, optional]: Generate or suppress diagnostics for a `# type: ignore` or `# pyright: ignore` comment that would have no effect if removed. - **reportMatchNotExhaustive** [boolean or string, optional]: Generate or suppress diagnostics for a `match` statement that does not provide cases that exhaustively match against all potential types of the target expression. - **reportUnreachable** [boolean or string, optional]: Generate or suppress diagnostics for code that is determined to be structurally unreachable or unreachable by type analysis. - **reportImplicitOverride** [boolean or string, optional]: Generate or suppress diagnostics for overridden methods in a class that are missing an explicit `@override` decorator. ### basedpyright exclusive settings - **reportAny** [boolean or string, optional]: Generate or suppress diagnostics for expressions that have the `Any` type. this accounts for all scenarios not covered by the `reportUnknown*` rules (since "Unknown" isn't a real type, but a distinction pyright makes to disallow the `Any` type only in certain circumstances). [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportany) - **reportExplicitAny** [boolean or string, optional]: Ban all explicit usages of the `Any` type. While `reportAny` bans expressions typed as `Any`, this rule bans using the `Any` type directly eg. in a type annotation. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportexplicitany) - **reportIgnoreCommentWithoutRule** [boolean or string, optional]: Enforce that all `# type:ignore`/`# pyright:ignore` comments specify a rule in brackets (eg. `# pyright:ignore[reportAny]`). [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportignorecommentwithoutrule) - **reportPrivateLocalImportUsage** [boolean or string, optional]: Generate or suppress diagnostics for use of a symbol from a local module that is not meant to be exported from that module. Like `reportPrivateImportUsage` but also checks imports from your own code. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportprivatelocalimportusage) - **reportImplicitRelativeImport** [boolean or string, optional]: Generate or suppress diagnostics for non-relative imports that do not specify the full path to the module. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportimplicitrelativeimport) - **reportInvalidCast** [boolean or string, optional]: Generate or suppress diagnostics for `cast`s to non-overlapping types. 
[more info](../benefits-over-pyright/new-diagnostic-rules.md#reportinvalidcast) - **reportUnsafeMultipleInheritance** [boolean or string, optional]: Generate or suppress diagnostics for classes that inherit from multiple base classes with an `__init__` or `__new__` method, which is unsafe because those additional constructors may either never get called or get called with invalid arguments. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportunsafemultipleinheritance) - **reportUnusedParameter** [boolean or string, optional]: Generate or suppress diagnostics for unused function parameters. - **reportImplicitAbstractClass** [boolean or string, optional]: Diagnostics for classes that extend abstract classes without also explicitly declaring themselves as abstract or implementing all of the required abstract methods. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportimplicitabstractclass) - **reportIncompatibleUnannotatedOverride** [boolean or string, optional]: Generate or suppress diagnostics for class variable declarations that override a symbol of the same name in a base class with a type that is incompatible with the base class symbol type, when the base class' symbol does not have a type annotation. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportincompatibleunannotatedoverride) - **reportUnannotatedClassAttribute** [boolean or string, optional]: Generate or suppress diagnostics for class attribute declarations that do not have a type annotation. These are unsafe unless `reportIncompatibleUnannotatedOverride` is enabled. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportunannotatedclassattribute) - **reportInvalidAbstractMethod** [boolean or string, optional]: Generate or suppress diagnostics for usages of `@abstractmethod` on a non-abstract class. [more info](../benefits-over-pyright/new-diagnostic-rules.md#reportinvalidabstractmethod) - **reportSelfClsDefault** [boolean or string, optional]: Generate or suppress diagnostics for a class or instance method having a default value for the first parameter. ## Execution Environment Options Pyright allows multiple “execution environments” to be defined for different portions of your source tree. For example, a subtree may be designed to run with different import search paths or a different version of the python interpreter than the rest of the source base. The following settings can be specified for each execution environment. Each source file within a project is associated with at most one execution environment -- the first one whose root directory contains that file. - **root** [string, required]: Root path for the code that will execute within this execution environment. - **extraPaths** [array of strings, optional]: Additional search paths (in addition to the root path) that will be used when searching for modules imported by files within this execution environment. If specified, this overrides the default extraPaths setting when resolving imports for files within this execution environment. Note that each file’s execution environment mapping is independent, so if file A is in one execution environment and imports a second file B within a second execution environment, any imports from B will use the extraPaths in the second execution environment. - **pythonVersion** [string, optional]: The version of Python used for this execution environment. If not specified, the global `pythonVersion` setting is used instead. 
- **pythonPlatform** [string, optional]: Specifies the target platform that will be used for this execution environment. If not specified, the global `pythonPlatform` setting is used instead.

In addition, any of the [type check diagnostics settings](config-files.md#type-check-diagnostics-settings) listed above can be specified. These settings act as overrides for the files in this execution environment.

## Sample Config File

The following is an example of a pyright config file:

```json title="pyrightconfig.json"
{
  "include": ["src"],
  "exclude": ["**/node_modules", "**/__pycache__", "src/experimental", "src/typestubs"],
  "ignore": ["src/oldstuff"],
  "defineConstant": { "DEBUG": true },
  "stubPath": "src/stubs",
  "reportMissingImports": "error",
  "reportMissingTypeStubs": false,
  "pythonVersion": "3.6",
  "pythonPlatform": "Linux",
  "executionEnvironments": [
    { "root": "src/web", "pythonVersion": "3.5", "pythonPlatform": "Windows", "extraPaths": ["src/service_libs"], "reportMissingImports": "warning" },
    { "root": "src/sdk", "pythonVersion": "3.0", "extraPaths": ["src/backend"] },
    { "root": "src/tests", "reportPrivateUsage": false, "extraPaths": ["src/tests/e2e", "src/sdk"] },
    { "root": "src" }
  ]
}
```

## Sample pyproject.toml File

```toml title="pyproject.toml"
[tool.basedpyright]
include = ["src"]
exclude = ["**/node_modules", "**/__pycache__", "src/experimental", "src/typestubs"]
ignore = ["src/oldstuff"]
defineConstant = { DEBUG = true }
stubPath = "src/stubs"
reportMissingImports = "error"
reportMissingTypeStubs = false
pythonVersion = "3.6"
pythonPlatform = "Linux"
executionEnvironments = [
  { root = "src/web", pythonVersion = "3.5", pythonPlatform = "Windows", extraPaths = ["src/service_libs"], reportMissingImports = "warning" },
  { root = "src/sdk", pythonVersion = "3.0", extraPaths = ["src/backend"] },
  { root = "src/tests", reportPrivateUsage = false, extraPaths = ["src/tests/e2e", "src/sdk"] },
  { root = "src" }
]
```

## Diagnostic Settings Defaults

Each diagnostic setting has a default that is dictated by the specified type checking mode. The default for each rule can be overridden in the configuration file or settings.

Some rules default to `"hint"`. This diagnostic category is only used by the language server so that your editor can grey out or add a strikethrough to the symbol, which you can disable by setting it to `"none"`. it does not affect the outcome when running basedpyright via the CLI, so in that context these severity levels essentially mean the same thing as `"off"`. [see here](#diagnostic-categories) for more information about each diagnostic category.

The following table lists the default severity levels for each diagnostic rule within each type checking mode (`"off"`, `"basic"`, `"standard"`, `"strict"`, `"recommended"` and `"all"`).

### `"recommended"` and `"all"`

basedpyright introduces two new diagnostic rulesets in addition to the ones in pyright: `"recommended"` and `"all"`. `"recommended"` enables all diagnostic rules as either `"warning"` or `"error"`, but sets `failOnWarnings` to `true` so that all diagnostics will still cause a non-zero exit code when run in the CLI. this means `"recommended"` is essentially the same as `"all"`, but makes it easier to differentiate errors that are likely to cause a runtime crash like an undefined variable from less serious warnings such as a missing type annotation.

!!! note some settings which are enabled by default in pyright are disabled by default in basedpyright (even when `typeCheckingMode` is `"all"`).
this is because these rules are [discouraged](#discouraged-settings), but in the interest of backwards compatibility with pyright, they remain available to any users who still want to use them. {{ generate_diagnostic_rule_table() }} ## Overriding language server settings If a `pyproject.toml` (with a `basedpyright` or `pyright` section) or a `pyrightconfig.json` exists, any [discouraged language server settings](./language-server-settings.md#discouraged-settings) (eg. in a VS Code `settings.json`) will be ignored. `pyrightconfig.json` is prescribing the environment to be used for a particular project. Changing the environment configuration options per user is not supported. If a `pyproject.toml` (with a `basedpyright` or `pyright` section) or a `pyrightconfig.json` does not exist, then the language server settings apply. for more information about why this is the case, [see here](./language-server-settings.md#discouraged-settings). ## Locale Configuration Pyright provides diagnostic messages that are translated to multiple languages, which are improved in basedpyright thanks to [community-contributed translations](../development/localization.md). By default, basedpyright uses the default locale of the operating system. You can override the desired locale through the use of one of the following environment variables, listed in priority order. ``` LC_ALL="de" LC_MESSAGES="en-us" LANG="zh_CN" LANGUAGE="fr" ``` The locale specifiers can be `xx-xx` or `xx_XX` in basedpyright. The latter form is used in unix-like platforms, which is not supported in pyright. When running in VS Code, the editor's locale takes precedence. Setting these environment variables applies only when using pyright outside of VS Code. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/configuration/language-server-settings.md # Language Server Settings ## settings !!! info "for users migrating from pyright or pylance" with the exception of `python.pythonPath` and `python.venvPath`, settings prefixed with `python.*` are not supported in basedpyright. use `basedpyright.*` instead. The basedpyright language server honors the following settings. **basedpyright.disableLanguageServices** [boolean]: Disables all language services. This includes hover text, type completion, signature completion, find definition, find references, etc. This option is useful if you want to use pyright only as a type checker but want to run another Python language server for language service features. **basedpyright.disableOrganizeImports** [boolean]: Disables the “Organize Imports” command. This is useful if you are using another extension that provides similar functionality and you don’t want the two extensions to fight each other. **basedpyright.disableTaggedHints** [boolean]: Disables the use of hint diagnostics with special tags to tell the client to display text ranges in a "grayed out" manner (to indicate unreachable code or unreferenced symbols) or in a "strike through" manner (to indicate use of a deprecated feature). **basedpyright.openFilesOnly** [boolean]: Determines whether pyright analyzes (and reports errors for) all files in the workspace, as indicated by the config file. If this option is set to true, pyright analyzes only open files. This setting is deprecated in favor of basedpyright.analysis.diagnosticMode. It will be removed at a future time. **basedpyright.useLibraryCodeForTypes** [boolean]: This setting is deprecated in favor of basedpyright.analysis.useLibraryCodeForTypes. 
It will be removed at a future time. **basedpyright.analysis.autoImportCompletions** [boolean]: Determines whether pyright offers auto-import completions. !!! note this setting is only for import suggestion [*completions*](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_completion), which are displayed as you type. it does not affect [import suggestion *code actions*](../benefits-over-pyright/pylance-features.md#import-suggestions), which are tied to the [`reportUndefinedVariable`](./config-files.md#reportUndefinedVariable) diagnostic rule. to disable those, you must disable the diagnostic rule itself. **basedpyright.analysis.autoSearchPaths** [boolean]: Determines whether pyright automatically adds common search paths like "src" if there are no execution environments defined in the config file. **basedpyright.analysis.diagnosticMode** ["openFilesOnly", "workspace"]: Determines whether pyright analyzes (and reports errors for) all files in the workspace, as indicated by the config file. If this option is set to "openFilesOnly", pyright analyzes only open files. Defaults to "openFilesOnly" **basedpyright.analysis.logLevel** ["Error", "Warning", "Information", or "Trace"]: Level of logging for Output panel. The default value for this option is "Information". **python.pythonPath** [path]: Path to Python interpreter. if you're using vscode, this setting is being deprecated by the VS Code Python extension in favor of a setting that is stored in the Python extension’s internal configuration store. Pyright supports both mechanisms but prefers the new one if both settings are present. **python.venvPath** [path]: Path to folder with subdirectories that contain virtual environments. The `python.pythonPath` setting is recommended over this mechanism for most users. For more details, refer to the [import resolution](../usage/import-resolution.md#configuring-your-python-environment) documentation. !!! note `python.venvPath` is discouraged in basedpyright. [more info](../benefits-over-pyright/better-defaults.md#default-value-for-pythonpath) ### based settings the following settings are exclusive to basedpyright **basedpyright.analysis.inlayHints.variableTypes** [boolean]: Whether to show inlay hints on assignments to variables. Defaults to `true`: ![](inlayHints.variableTypes.png) **basedpyright.analysis.inlayHints.callArgumentNames** [boolean]: Whether to show inlay hints on function arguments. Defaults to `true`: ![](inlayHints.callArgumentNames.png) **basedpyright.analysis.inlayHints.callArgumentNamesMatching** [boolean]: Whether to show inlay hints on function arguments when the input expression is a variable with the same name as the parameter. Defaults to `false`. **basedpyright.analysis.inlayHints.functionReturnTypes** [boolean]: Whether to show inlay hints on function return types. Defaults to `true`: ![](inlayHints.functionReturnTypes.png) **basedpyright.analysis.inlayHints.genericTypes** [boolean]: Whether to show inlay hints on inferred generic types. Defaults to `true`: ![](inlayHints.genericTypes.png) **basedpyright.analysis.useTypingExtensions** [boolean]: Whether to rely on imports from the `typing_extensions` module when targeting older versions of python that do not include certain typing features such as the `@override` decorator. Defaults to `false`. 
[more info](../benefits-over-pyright/language-server-improvements.md#autocomplete-improvements) **basedpyright.analysis.fileEnumerationTimeout** [integer]: Timeout (in seconds) for file enumeration operations. When basedpyright scans your workspace files, it can take a long time in some workspaces. This setting controls when to show a "slow enumeration" warning. Default is 10 seconds. **basedpyright.analysis.autoFormatStrings** [boolean]: Whether to automatically insert an `f` in front of a string when typing a `{` inside it. Defaults to `true`. [more info](../benefits-over-pyright/pylance-features.md#automatic-conversion-to-f-string-when-typing-inside-a-string) **basedpyright.analysis.configFilePath** [path]: Path to the directory or file containing the Pyright configuration (`pyrightconfig.json` or `pyproject.toml`). If a directory is specified, basedpyright will search for the config file in that directory. This is useful for monorepo structures where the config file is in a subdirectory rather than the workspace root. For example, if your Python code is in a `backend/` subdirectory with its own `pyproject.toml`, you can set this to `${workspaceFolder}/backend` to make basedpyright use that configuration file instead of searching from the workspace root. **basedpyright.analysis.baselineMode** ["auto", "discard"]: Controls how the baseline file is updated when files are saved. Use `auto` to automatically remove fixed errors from the baseline (default), or `discard` to prevent automatic updates. [more info](../benefits-over-pyright/baseline.md) ### discouraged settings these options can also be configured [using a config file](./config-files.md). it's recommended to use either a `pyproject.toml` or `pyrightconfig.json` file instead of the language server to configure type checking for the following reasons: - the config should be the same for everybody working on your project. you should commit the config file so that other contributors don't have to manually configure their language server to match yours. - it ensures that the basedpyright language server behaves the same as the `basedpyright` CLI, which is useful if you have [your CI configured to type check your code](../benefits-over-pyright/improved-ci-integration.md) (you should!) however these settings are still supported to maintain compatibility with pyright. **basedpyright.analysis.diagnosticSeverityOverrides** [map]: Allows a user to override the severity levels for individual diagnostic rules. "reportXXX" rules in the type check diagnostics settings in [configuration](config-files.md#type-check-diagnostics-settings) are supported. Use the rule name as a key and one of "error," "warning," "information," "true," "false," or "none" as value. **basedpyright.analysis.exclude** [array of paths]: Paths of directories or files that should not be included. This can be overridden in the configuration file. **basedpyright.analysis.extraPaths** [array of paths]: Paths to add to the default execution environment extra paths if there are no execution environments defined in the config file. **basedpyright.analysis.ignore** [array of paths]: Paths of directories or files whose diagnostic output (errors and warnings) should be suppressed. This can be overridden in the configuration file. **basedpyright.analysis.include** [array of paths]: Paths of directories or files that should be included. This can be overridden in the configuration file. **basedpyright.analysis.stubPath** [path]: Path to directory containing custom type stub files. 
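For illustration, a custom stub for a hypothetical untyped package named `fastmath` would live under the configured stub directory (`./typings` by default), e.g. at `typings/fastmath/__init__.pyi`:

```python
# typings/fastmath/__init__.pyi -- hand-written stub for a hypothetical package
from collections.abc import Sequence

def mean(values: Sequence[float]) -> float: ...
def clamp(value: float, low: float, high: float) -> float: ...
```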
**basedpyright.analysis.typeCheckingMode** ["off", "basic", "standard", "strict", "recommended", "all"]: Determines the default type-checking level used by pyright. This can be overridden in the configuration file. (Note: This setting used to be called "basedpyright.typeCheckingMode". The old name is deprecated but is still currently honored.) **basedpyright.analysis.typeshedPaths** [array of paths]: Paths to look for typeshed modules. Pyright currently honors only the first path in the array. **basedpyright.analysis.useLibraryCodeForTypes** [boolean]: Determines whether pyright reads, parses and analyzes library code to extract type information in the absence of type stub files. Type information will typically be incomplete. We recommend using type stubs where possible. The default value for this option is true. #### basedpyright exclusive settings as mentioned [above](#discouraged-settings), it's recommended to configure these settings [using a config file](./config-files.md) instead. **basedpyright.analysis.baselineFile** [path]: Path to a baseline file that contains a list of diagnostics that should be ignored. defaults to `./.basedpyright/baseline.json`. [more info](../benefits-over-pyright/baseline.md) ## where do i configure these settings? the way you configure the basedpyright language server depends on your IDE. below are some examples for [some of the supported editors](../installation/ides.md). this is not a comprehensive list, so if your editor is missing, consult the documentation for its language server support. ### vscode / vscodium the basedpyright language server settings can be configured using a workspace or global `settings.json`: ```json title="./.vscode/settings.json" { "basedpyright.analysis.diagnosticMode": "openFilesOnly" } ``` For monorepo projects where your Python code is in a subdirectory: ```json title="./.vscode/settings.json" { "basedpyright.analysis.configFilePath": "${workspaceFolder}/backend" } ``` ### neovim The language server can be configured in your neovim settings: For Neovim 0.11+ ```lua title="lsp/basedpyright.lua" return { settings = { basedpyright = { analysis = { diagnosticMode = "openFilesOnly", inlayHints = { callArgumentNames = true } } } } } ``` For monorepo projects where your Python code is in a subdirectory: ```lua title="lsp/basedpyright.lua" return { settings = { basedpyright = { analysis = { configFilePath = vim.fn.getcwd() .. 
"/backend" } } } } ``` For Neovim 0.10 (legacy) ```lua require("lspconfig").basedpyright.setup { settings = { basedpyright = { analysis = { diagnosticMode = "openFilesOnly", inlayHints = { callArgumentNames = true } } } } } ``` ### helix ```toml title="languages.toml" [language-server.basedpyright] command = "basedpyright-langserver" args = ["--stdio"] [language-server.basedpyright.config] basedpyright.analysis.diagnosticMode = "openFilesOnly" ``` ### zed ```json { "languages": { "Python": { "language_servers": ["basedpyright"] } }, "lsp": { "basedpyright": { "settings": { "python": { "pythonPath": ".venv/bin/python" }, "basedpyright.analysis": { "diagnosticMode": "openFilesOnly" } } } } } ``` ### emacs #### eglot ```lisp (use-package eglot :ensure t :config (add-to-list 'eglot-server-programs '( (python-mode python-ts-mode) "basedpyright-langserver" "--stdio" )) (setq-default eglot-workspace-configuration '(:basedpyright ( :typeCheckingMode "recommended" ) :basedpyright.analysis ( :diagnosticSeverityOverrides ( :reportUnusedCallResult "none" ) :inlayHints ( :callArgumentNames :json-false ) ))) ) ``` --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/development/contributing.md # Contributing ## Github Issues unlike the upstream pyright repo, we are very open to ideas for improvements and bug reports. if you've raised an issue on the upstream pyright repo that was closed without a solution, feel free to [raise it again here](https://github.com/DetachHead/basedpyright/issues/new). please upvote issues you find important using the 👍 react. we don't close issues that don't get enough upvotes or anything like that. this is just to help us prioritize which issues to address first. ## Building although pyright is written in typescript, in basedpyright we've made improvements to the developer experience for python developers who are not familiar with typescript/nodejs. you should be able to work on basedpyright without ever having to install nodejs yourself. the node installation is instead managed by a [pypi package](https://pypi.org/project/nodejs-wheel/) and installed to the project's virtualenv. the only thing you need to have installed already is python (any version >=3.9 should work) we recommend using vscode, as there are project configuration files in the repo that set everything up correctly (linters/formatters/debug configs, etc). 1. hit `F1` > `Tasks: Run task` > `install dependencies`, or run the following command: ``` ./pw uv sync ``` this will install all dependencies required for the project (pyprojectx, uv, node, typescript, etc.). all dependencies are installed locally to `./.venv` and `./node_modules` 2. press "Yes" when prompted by vscode to use the newly created virtualenv you can now run any node/npm commands from inside the venv. ## Debugging !!! note these instructions assume you are using VSCode/VSCodium. if you are using another editor, npm tasks can be run via the command line with `npm run script-name`. you can view all the available scripts in the root `./package.json`, but VSCode-specific debug configs will be unavailable. ### CLI To debug pyright, open the root source directory within VS Code. Open the debug sub-panel and choose “Pyright CLI” from the debug target menu. Click on the green “run” icon or press F5 to build and launch the command-line version in the VS Code debugger. There's also a similar option that provides a slightly faster build/debug loop: make sure you've built the pyright-internal project e.g. 
with Terminal > Run Build Task > tsc: watch, then choose “Pyright CLI (pyright-internal)”. ### VSCode extension To debug the VS Code extension, select “Pyright extension” from the debug target menu. Click on the green “run” icon or press F5 to build and launch a second copy of VS Code with the extension. Within the second VS Code instance, open a python source file so the pyright extension is loaded. Return to the first instance of VS Code and select “Pyright extension attach server” from the debug target menu and click the green “run” icon. This will attach the debugger to the process that hosts the type checker. You can now set breakpoints, etc. To debug the VS Code extension in watch mode, you can do the above, but select “Pyright extension (watch mode)”. When pyright's source is saved, an incremental build will occur, and you can either reload the second VS Code window or relaunch it to start using the updated code. Note that the watcher stays open when debugging stops, so you may need to stop it (or close VS Code) if you want to perform packaging steps without the output potentially being overwritten. !!! tip "inspecting LSP messages" while the VSCode extension is running in this mode, you can run the `npm: lsp-inspect` task to launch the [LSP inspector](https://lsp-devtools.readthedocs.io/en/latest/lsp-devtools/guide/inspect-command.html), which allows you to browse all messages sent between the client and server: ![](./lspinspector.svg) note that this does not work on windows, see [this issue](https://github.com/swyddfa/lsp-devtools/issues/125). as a workaround you can use [the client](#language-server) instead. ### Language server you may want to debug the language server without the VSCode extension, which can be useful when investigating issues that only seem to occur in other editors. you can do this using [LSP-inspector](https://github.com/swyddfa/lsp-devtools)'s client by running the `npm: lsp-client` task or the "LSP client" launch config !!! note "for windows users" the npm script will not work if run from VSCode's task runner. use the launch config instead. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/development/internals.md # Internals ## Code Structure * `packages/vscode-pyright/src/extension.ts`: Language Server Protocol (LSP) client entry point for VS Code extension. 
* `packages/pyright-internal/typeshed-fallback/`: Recent copy of Typeshed type stub files for Python stdlib * `packages/pyright-internal/src/pyright.ts`: Main entry point for command-line tool * `packages/pyright-internal/src/server.ts`: Main entry point for LSP server * `packages/pyright-internal/src/analyzer`: Modules that perform analysis passes over Python parse tree * `packages/pyright-internal/src/common`: Modules that are common to the parser and analyzer * `packages/pyright-internal/src/parser`: Modules that perform tokenization and parsing of Python source * `packages/pyright-internal/src/tests`: Tests for the parser and analyzer * `packages/pyright`: basedpyright npm package (used internally by the pypi package) * `packages/browser-pyright`: basedpyright build that can run in a browser (used internally by [the playground](https://basedpyright.com)) * `basedpyright`: pypi package wrapper for the npm package, so that users don't need to install nodejs themselves * `docstubs`: stubs with [docstrings on compiled modules](../benefits-over-pyright/pylance-features.md#docstrings-for-compiled-builtin-modules) (generated from `packages/pyright-internal/typeshed-fallback/` when building the pypi package) ## Core Concepts Pyright implements a [service](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/service.ts), a persistent in-memory object that controls the order of analysis and provides an interface for the language server. For multi-root workspaces, each workspace gets its own service instance. The service owns an instance of a [program](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/program.ts), which tracks the configuration file and all of the source files that make up the source base that is to be analyzed. A source file can be added to a program if it is a) referenced by the config file, b) currently open in the editor, or c) imported directly or indirectly by another source file. The program object is responsible for setting up file system watchers and updating the program as files are added, deleted, or edited. The program is also responsible for prioritizing all phases of analysis for all files, favoring files that are open in the editor (and their import dependencies). The program tracks multiple [sourceFile](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/sourceFile.ts) objects. Each source file represents the contents of one Python source file on disk. It tracks the status of analysis for the file, including any intermediate or final results of the analysis and the diagnostics (errors and warnings) that result. The program makes use of an [importResolver](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/importResolver.ts) to resolve the imported modules referenced within each source file. ## Analysis Phases Pyright performs the following analysis phases for each source file. The [tokenizer](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/parser/tokenizer.ts) is responsible for converting the file’s string contents into a stream of tokens. White space, comments, and some end-of-line characters are ignored, as they are not needed by the parser. The [parser](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/parser/parser.ts) is responsible for converting the token stream into a parse tree. 
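If you are more familiar with Python than TypeScript, the same two phases can be illustrated with the standard library's `tokenize` and `ast` modules. This is only an analogy (pyright's tokenizer and parser are its own TypeScript implementations), but the relationship between the token stream and the parse tree is the same:

```python
# Analogy only: pyright's tokenizer and parser are TypeScript modules, but
# Python's stdlib demonstrates the same two-phase pipeline.
import ast
import io
import tokenize

source = "x: int = 1 + 2\n"

# Phase 1: tokenization turns the raw text into a stream of tokens.
tokens = tokenize.generate_tokens(io.StringIO(source).readline)
print([token.string for token in tokens if token.string.strip()])

# Phase 2: parsing turns the token stream into a tree that later phases walk.
tree = ast.parse(source)
print(ast.dump(tree.body[0], indent=2))
```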
A generalized [parseTreeWalker](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/parseTreeWalker.ts) provides a convenient way to traverse the parse tree. All subsequent analysis phases utilize the parseTreeWalker. The [binder](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/binder.ts) is responsible for building scopes and populating the symbol table for each scope. It does not perform any type checking, but it detects and reports some semantic errors that will result in unintended runtime exceptions. It also detects and reports inconsistent name bindings (e.g. a variable that uses both a global and nonlocal binding in the same scope). The binder also builds a “reverse code flow graph” for each scope, allowing the type analyzer to determine a symbol’s type at any point in the code flow based on its antecedents. The [checker](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/checker.ts) is responsible for checking all of the statements and expressions within a source file. It relies heavily on the [typeEvaluator](https://github.com/microsoft/pyright/blob/main/packages/pyright-internal/src/analyzer/typeEvaluator.ts) module, which performs most of the heavy lifting. The checker doesn’t run on all files, only those that require full diagnostic output. For example, if a source file is not part of the program but is imported by the program, the checker doesn’t need to run on it. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/development/localization.md # Localization notes the translations in pyright come from microsoft's localization team, who are not programmers. this not only results in poor quality translations, but microsoft also doesn't accept contributions to fix them ([more info here](https://github.com/microsoft/pyright/issues/7441#issuecomment-1987027067)). in basedpyright we want to fix this but we need your help! if you are fluent in a language other than english and would like to contribute, it would be greatly appreciated. Here are some guidelines for contributors who would like to help improve the translations in basedpyright. ## Tools for localization A TUI tool `build/py_latest/localization_helper.py` script is provided to help you with the localization process. It can be used to: - Check every message in comparison with the corresponding English message. - Compare message keys with the English version and find out which messages are missing and which ones are redundant. ### Usage Run the script: ```shell # use uv ./pw uv run build/py_latest/localization_helper.py # or from inside the venv npm run localization-helper ``` About the interface: - The TUI is created with [textual](https://github.com/Textualize/textual), and it is generally fine to use it with mouse. - Click on the tabs for the given language at the top of the interface to switch to the localized content in the corresponding language. - Click on the function buttons at the bottom of the interface to carry out the corresponding operations. - Click on a category in the message tree to expand/collapse the content under that category. - Click on a message entry in the message tree to prompt the corresponding English entry automatically. !!! note If the operation "Compare message keys differences" is triggered, the popup can ONLY be closed by pressing C. ## General guidelines - Do not use any automatic translation tools. 
- In cases where a word is difficult to translate but it refers to the name of a symbol, leaving it in English and using backticks makes more sense than attempting to translate it. For example, if the English error message says "cannot assign to Final variable" then the translation could just be "невозможно присвоить переменной `Final`" instead of trying to use a Russian word for "final". - basedpyright is maintained by developers who only speak English, so it would be helpful if you could get someone who also speaks your language to review your changes as well. - new rules that are specific to basedpyright currently do not have any translations at all (see [this issue](https://github.com/DetachHead/basedpyright/issues/81)). Providing translations for those would be greatly appreciated. - The initial translations from Pyright seem to be pretty low quality and low consistency. If you want to start a "renovation" for a particular language, it's a good idea to come up with a glossary of common terms so that the final translations will be consistent. Check if the https://docs.python.org has a translation for your language, and if it does, use that as a baseline. - Some messages in the English localization file contain a `"comment"` field with formal specifiers like `Locked` and `StrContains`. Use your judgement as to whether they should be followed. It's usually better to apply common sense and the language-specific glossary to decide to which parts of the string should be translated. ## Specific languages ### Russian #### Style guide - Буква Ё/ё не используется - Знак препинания `;` заменяется на точку - Не стоит в точности повторять структуру английского сообщения, если она неестественна для русского языка. В частности, избегайте конструкции `X является Y` (`класс Foo является абстрактным` -> `класс Foo абстрактный`) - По возможности делайте структуру сообщения проще #### Глоссарий | English term | Canonical translation | | -------------------------- | -------------------------------------------------- | | @final / final class | `@final` (как есть) | | async function | асинхронная функция | | awaitable | awaitable (нет перевода) или: поддерживающий await | | complex [number] | комплексное число | | comprehension | включение | | dataclass, data class | датакласс | | XYZ is deprecated | XYZ [больше] не рекомендуется | | Enum | перечисление | | f-string | f-строка | | Final | `Final` (как есть) | | format string literal | f-строка | | generic | обобщенный | | generic type alias | обобщенный псевдоним типа | | keyword argument/parameter | именованный аргумент/параметр | | keyword-only | (исключительно) именованный | | mapping | mapping (нет перевода) | | positional-only | (исключительно) позиционный | | set (built-in type) | множество | | tuple | кортеж | | type alias | псевдоним типа | | type annotation | аннотация типа | | type variable | переменная типа | ### Chinese #### Style guide - 风格调整参考了该项目 [sparanoid/chinese-copywriting-guidelines](https://github.com/sparanoid/chinese-copywriting-guidelines),该项目提供了一些中文文案排版的基本规范,以便提高文案的可读性。 - 通常在中英文混排时,中文与英文之间需要添加空格,以增加可读性。在原始翻译中,这一规则仅在部分文本中得到了遵循,因此对其进行了调整。并且通过引号括起的文本和参数文本也遵循这一规则,在两侧添加空格以强调其内容。 - 原始翻译中存在中文的全角标点符号和英文的半角标点符号混用,同时存在使用不正确,因此对其进行调整。考虑到文本格式化时部分条目会出现**硬编码(Hard-coded)**的直引号 `"`,因此将所有格式化参数的双引号统一为英文直双引号,代码标识统一为反引号。除引号以外的非英文之间的标点全部统一为全角标点符号。 - 考虑到英文原文可能不符合中文的表达逻辑,因此允许不完全遵循原文,以符合中文理解习惯和语法为主,但仍需保证正确性。 - Python 官方文档的部分译文也存在一致性问题,需要酌情选取广泛使用的译文。 - 若翻译后文本并不是常见写法,可添加括号并标注原文。 - 在英文原文中使用的名词复数形式保留原词不译时: - 若复数形式是术语的一部分或影响语义(如专有名词、代码标识),保留复数; - 
若复数无特殊含义或中文可通过量词/语境隐含复数,改为单数形式。 - 对于某些冗余表述(包括但不限于以下内容),可以酌情省略翻译: - `formatted string literals`:f-string 本身不存在 non-literal 形式,可直接译作“格式化字符串”或直接使用“f-string”; - `object of type None`:`None` 作为特殊的可字面注解的单例,可直接缩略为 `None`。 #### 用词调整 | 原词 (Word) | 原始翻译 (Original) | 调整翻译 (Adjusted) | 错译类型 (Type of Mistranslation) | | ---------------------------- | --------------------- | ------------------- | ------------------------------------------ | | annotation | (类型)批注 | (类型)注解 | 与文档不一致/Inconsistent with Python docs | | Any | 任意 | Any | 固定术语/Terminology | | Unknown | 未知/Unknown | 未知 | 固定术语/Terminology | | import | 导入/Import | 导入 | 语义错误/Wrong meaning | | True | true/True | True | 语义错误/Wrong meaning | | assign | 分配 | 赋值 | 词义错误/Wrong meaning | | follow | 遵循 | 在 ... 之后 | 词义错误/Wrong meaning | | variance | 差异 | 变异性 | 词义错误/Wrong meaning | | key | 密钥 | 键 | 词义错误/Wrong meaning | | argument | 参数 | 参数/实参/传入值 | 词义不准确/Inaccurate meaning | | parameter | 参数 | 参数/形参 | 词义不准确/Inaccurate meaning | | implementation/unimplemented | (未)实施/实行(的) | (未)实现(的) | 词义不准确/Inaccurate meaning | | obscure | 遮盖/隐蔽 | 遮蔽 | 词义不准确/Inaccurate meaning | | irrefutable | 无可辩驳的 | 无条件匹配(的) | 词义不准确/Inaccurate meaning | | entry | 条目 | 项 | 词义不准确/Inaccurate meaning | #### 量词 - 对于“数量”“个数”等离散计数的表述,在数值后通常需要添加量词“个”; - 对于“长度”“大小”等在习惯上非离散计数的表述,在数值后一般不添加量词“个”。 --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/development/upstream.md # how we keep up-to-date with upstream every time pyright releases a new version, we merge its release tag into basedpyright. each basedpyright version is based on a release version of pyright. you can check which pyright version basedpyright is based on by running `basedpyright --version`: ``` basedpyright 1.14.0 based on pyright 1.1.372 ``` we try to update basedpyright as soon as a new pyright version is released. we typically release on the same day that pyright does. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/getting_started/features.md ## Pyright Features ### Speed Pyright is a fast type checker meant for large Python source bases. It can run in a “watch” mode and performs fast incremental updates when files are modified. ### Configurability Pyright supports [configuration files](../configuration/config-files.md) that provide granular control over settings. Different “execution environments” can be associated with subdirectories within a source base. Each environment can specify different module search paths, python language versions, and platform targets. 
### Type Checking Features * [PEP 484](https://www.python.org/dev/peps/pep-0484/) type hints including generics * [PEP 487](https://www.python.org/dev/peps/pep-0487/) simpler customization of class creation * [PEP 526](https://www.python.org/dev/peps/pep-0526/) syntax for variable annotations * [PEP 544](https://www.python.org/dev/peps/pep-0544/) structural subtyping * [PEP 561](https://www.python.org/dev/peps/pep-0561/) distributing and packaging type information * [PEP 563](https://www.python.org/dev/peps/pep-0563/) postponed evaluation of annotations * [PEP 570](https://www.python.org/dev/peps/pep-0570/) position-only parameters * [PEP 585](https://www.python.org/dev/peps/pep-0585/) type hinting generics in standard collections * [PEP 586](https://www.python.org/dev/peps/pep-0586/) literal types * [PEP 589](https://www.python.org/dev/peps/pep-0589/) typed dictionaries * [PEP 591](https://www.python.org/dev/peps/pep-0591/) final qualifier * [PEP 593](https://www.python.org/dev/peps/pep-0593/) flexible variable annotations * [PEP 604](https://www.python.org/dev/peps/pep-0604/) complementary syntax for unions * [PEP 612](https://www.python.org/dev/peps/pep-0612/) parameter specification variables * [PEP 613](https://www.python.org/dev/peps/pep-0613/) explicit type aliases * [PEP 635](https://www.python.org/dev/peps/pep-0635/) structural pattern matching * [PEP 646](https://www.python.org/dev/peps/pep-0646/) variadic generics * [PEP 647](https://www.python.org/dev/peps/pep-0647/) user-defined type guards * [PEP 655](https://www.python.org/dev/peps/pep-0655/) required typed dictionary items * [PEP 673](https://www.python.org/dev/peps/pep-0673/) Self type * [PEP 675](https://www.python.org/dev/peps/pep-0675/) arbitrary literal strings * [PEP 681](https://www.python.org/dev/peps/pep-0681/) dataclass transform * [PEP 692](https://www.python.org/dev/peps/pep-0692/) TypedDict for kwargs typing * [PEP 695](https://www.python.org/dev/peps/pep-0695/) type parameter syntax * [PEP 696](https://www.python.org/dev/peps/pep-0696/) type defaults for TypeVarLikes * [PEP 698](https://www.python.org/dev/peps/pep-0698/) override decorator for static typing * [PEP 702](https://www.python.org/dev/peps/pep-0702/) marking deprecations * [PEP 705](https://www.python.org/dev/peps/pep-0705/) TypedDict: read-only items * [PEP 728](https://www.python.org/dev/peps/pep-0728/) TypedDict with typed extra items * [PEP 742](https://www.python.org/dev/peps/pep-0742/) narrowing types with TypeIs * [PEP 746](https://www.python.org/dev/peps/pep-0746/) (experimental) type checking annotated metadata * [PEP 747](https://www.python.org/dev/peps/pep-0747/) (experimental) annotating type forms * [PEP 764](https://www.python.org/dev/peps/pep-0764/) (experimental) inline typed dictionaries * Type inference for function return values, instance variables, class variables, and globals * Type guards that understand conditional code flow constructs like if/else statements ### Language Server Support Pyright ships as both a command-line tool and a language server that provides many powerful features that help improve programming efficiency. 
* Intelligent type completion of keywords, symbols, and import names appears when editing * Import statements are automatically inserted when necessary for type completions * Signature completion tips help when filling in arguments for a call * Hover over symbols to provide type information and doc strings * Find Definitions to quickly go to the location of a symbol’s definition * Find References to find all references to a symbol within a code base * Rename Symbol to rename all references to a symbol within a code base * Find Symbols within the current document or within the entire workspace * View call hierarchy information — calls made within a function and places where a function is called * Organize Imports command for automatically ordering imports according to PEP8 rules * Type stub generation for third-party libraries ### Built-in Type Stubs Pyright includes a recent copy of the stdlib type stubs from [Typeshed](https://github.com/python/typeshed). It can be configured to use another (perhaps more recent or modified) copy of the Typeshed type stubs. Of course, it also works with custom type stub files that are part of your project. ## Limitations Pyright provides support for Python 3.0 and newer. There are no plans to support older versions. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/getting_started/getting-started.md ## Getting Started with Type Checking A static type checker like Pyright can add incremental value to your source code as more type information is provided. Here is a typical progression: ### 1. Initial Type Checking * Install pyright (either the language server or command-line tool). * Write a minimal `pyrightconfig.json` that defines `include` entries. Place the config file in your project’s top-level directory and commit it to your repo. Alternatively, you can add a pyright section to a `pyproject.toml` file. For additional details and a sample config file, refer to [this documentation](../configuration/config-files.md). * Run pyright over your source base with the default settings. Fix any errors and warnings that it emits. Optionally disable specific diagnostic rules if they are generating too many errors. They can be re-enabled at a later time. ### 2. Types For Imported Libraries * Update dependent libraries to recent versions. Many popular libraries have recently added inlined types, which eliminates the need to install or create type stubs. * Enable the `reportMissingTypeStubs` setting in the config file and add (minimal) type stub files for the imported files. You may wish to create a stubs directory within your code base — a location for all of your custom type stub files. Configure the “stubPath” config entry to refer to this directory. * Look for type stubs for the packages you use. Some package authors opt to ship stubs as a separate companion package named that has “-stubs” appended to the name of the original package. * In cases where type stubs do not yet exist for a package you are using, consider creating a custom type stub that defines the portion of the interface that your source code consumes. Check in your custom type stub files and configure pyright to run as part of your continuous integration (CI) environment to keep the project “type clean”. ### 3. Incremental Typing * Incrementally add type annotations to your code files. The annotations that provide most value are on function input parameters, instance variables, and return parameters (in that order). 
* Enable stricter type checking options like "reportUnknownParameterType", and "reportUntypedFunctionDecorator". ### 4. Strict Typing * On a file-by-file basis, enable all type checking options by adding the comment `# pyright: strict` somewhere in the file. * Optionally add entire subdirectories to the `strict` config entry to indicate that all files within those subdirectories should be strictly typed. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/getting_started/type-concepts.md ## Static Typing: The Basics Getting started with static type checking in Python is easy, but it’s important to understand a few simple concepts. In addition to the documentation below, you may also find the community-maintained [Static Typing Documentation](https://typing.readthedocs.io/en/latest/) to be of use. That site also includes the official [Specification for the Python Type System](https://typing.readthedocs.io/en/latest/spec/index.html). ### Type Declarations When you add a type annotation to a variable or a parameter in Python, you are _declaring_ that the symbol will be assigned values that are compatible with that type. You can think of type annotations as a powerful way to comment your code. Unlike text-based comments, these comments are readable by both humans and enforceable by type checkers. If a variable or parameter has no type annotation, Pyright will assume that any value can be assigned to it. ### Type Assignability When your code assigns a value to a symbol (in an assignment expression) or a parameter (in a call expression), the type checker first determines the type of the value being assigned. It then determines whether the target has a declared type. If so, it verifies that the type of the value is _assignable_ to the declared type. Let’s look at a few simple examples. In this first example, the declared type of `a` is `float`, and it is assigned a value that is an `int`. This is permitted because `int` is assignable to `float`. ```python a: float = 3 ``` In this example, the declared type of `b` is `int`, and it is assigned a value that is a `float`. This is flagged as an error because `float` is not assignable to `int`. ```python b: int = 3.4 # Error ``` This example introduces the notion of a _Union type_, which specifies that a value can be one of several distinct types. A union type can be expressed using the `|` operator to combine individual types. ```python c: int | float = 3.4 c = 5 c = a c = b c = None # Error c = "" # Error ``` This example introduces the _Optional_ type, which is the same as a union with `None`. ```python d: Optional[int] = 4 d = b d = None d = "" # Error ``` Those examples are straightforward. Let’s look at one that is less intuitive. In this example, the declared type of `f` is `list[int | None]`. A value of type `list[int]` is being assigned to `f`. As we saw above, `int` is assignable to `int | None`. You might therefore assume that `list[int]` is assignable to `list[int | None]`, but this is an incorrect assumption. To understand why, we need to understand generic types and type arguments. ```python e: list[int] = [3, 4] f: list[int | None] = e # Error ``` ### Generic Types A _generic type_ is a class that is able to handle different types of inputs. For example, the `list` class is generic because it is able to operate on different types of elements. The type `list` by itself does not specify what is contained within the list. 
Its element type must be specified as a _type argument_ using the indexing (square bracket) syntax in Python. For example, `list[int]` denotes a list that contains only `int` elements whereas `list[int | float]` denotes a list that contains a mixture of int and float elements.

We noted above that `list[int]` is not assignable to `list[int | None]`. Why is this the case? Consider the following example.

```python
my_list_1: list[int] = [1, 2, 3]
my_list_2: list[int | None] = my_list_1  # Error

my_list_2.append(None)

for elem in my_list_1:
    print(elem + 1)  # Runtime exception
```

The code is appending the value `None` to the list `my_list_2`, but `my_list_2` refers to the same object as `my_list_1`, which has a declared type of `list[int]`. The code has violated the type of `my_list_1` because it no longer contains only `int` elements. This broken assumption results in a runtime exception. The type checker detects this broken assumption when the code attempts to assign `my_list_1` to `my_list_2`.

`list` is an example of a _mutable container type_. It is mutable in that code is allowed to modify its contents — for example, add or remove items. The type parameters for mutable container types are typically marked as _invariant_, which means that an exact type match is enforced. This is why the type checker reports an error when attempting to assign a `list[int]` to a variable of type `list[int | None]`.

Most mutable container types also have immutable counterparts.

| Mutable Type | Immutable Type |
| ------------ | -------------- |
| list         | Sequence       |
| dict         | Mapping        |
| set          | AbstractSet    |
| n/a          | tuple          |

Switching from a mutable container type to a corresponding immutable container type is often an effective way to resolve type errors relating to assignability. Let's modify the example above by changing the type annotation for `my_list_2`.

```python
my_list_1: list[int] = [1, 2, 3]
my_list_2: Sequence[int | None] = my_list_1  # No longer an error
```

The type error on the second line has now gone away. For more details about generic types, type parameters, and invariance, refer to [PEP 483 — The Theory of Type Hints](https://www.python.org/dev/peps/pep-0483/).

### Debugging Types

When you want to know the type that the type checker has evaluated for an expression, you can use the special `reveal_type()` function:

```python
x = 1
reveal_type(x)  # Type of "x" is "Literal[1]"
```

This function is always available and does not need to be imported. When you use Pyright within an IDE, you can also simply hover over an identifier to see its evaluated type.

---

# Source: https://github.com/DetachHead/basedpyright/blob/main/docs/index.md

# basedpyright

--8<-- "README.md:header" 🛝 [Playground](http://basedpyright.com)
- :material-microsoft-visual-studio-code:{ .lg .middle } **Pylance features in any editor** *** basedpyright re-implements many features exclusive to pylance - microsoft's closed-source extension that can't be used outside of vscode. [:octicons-arrow-right-24: more info](./benefits-over-pyright/pylance-features.md) - :simple-pypi:{ .lg .middle } **Easy to install & pin** *** unlike pyright, basedpyright can be installed from PyPI without having to install NodeJS. the VSCode extension uses the same version so you never see different errors in your editor vs. the CLI [:octicons-arrow-right-24: more info](./benefits-over-pyright/pypi-package-vscode-pinning.md) - :octicons-alert-16:{ .lg .middle } **Strict by default & new type checking rules** *** basedpyright introduces many new diagnostic rules to detect potentially serious issues in your code that pyright won't catch, and [all rules are enabled by default](./benefits-over-pyright/better-defaults.md) for maximum discoverability. [:octicons-arrow-right-24: more info](./benefits-over-pyright/new-diagnostic-rules.md) - :octicons-checklist-16:{ .lg .middle } **Baseline support** *** adopt basedpyright's stricter type checking rules effortlessly in an existing project. no need to update any of your old code [:octicons-arrow-right-24: more info](./benefits-over-pyright/baseline.md) - :octicons-sync-16:{ .lg .middle } **Up-to-date** *** when a new version of pyright is released, we merge it and release a new version of basedpyright within a day [:octicons-arrow-right-24: more info](./development/upstream.md) - :octicons-issue-reopened-16:{ .lg .middle } **Open to feedback** *** we listen to user feedback. if you encounter any problems or have an idea for a new feature, don't hesitate to open an issue. [:octicons-arrow-right-24: issue tracker](https://github.com/DetachHead/basedpyright/issues)
see the [Benefits over Pyright](./benefits-over-pyright/new-diagnostic-rules.md) section for a comprehensive list of new features and improvements we've made to pyright. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/installation/command-line-and-language-server.md # Command-line & language server ## pypi package unlike pyright, the basedpyright CLI and language server are available as a [pypi package](https://pypi.org/project/basedpyright/). this makes it far more convenient for python developers to use, since there's no need to install any additional tools. just install it normally via your package manager of choice: === "uv (recommended)" add it to your project's dev dependencies (recommended): ``` uv add --dev basedpyright ``` or install it globally: ``` uv tool install basedpyright ``` === "pip" ``` pip install basedpyright ``` ## other installation methods the basedpyright CLI & language server is also available outside of pypi: === "conda" ``` conda install conda-forge::basedpyright ``` === "homebrew" ``` brew install basedpyright ``` === "nixOS" [see here](https://search.nixos.org/packages?channel=unstable&show=basedpyright) === "npm" ``` npm install basedpyright ``` note that we recommend installing basedpyright via pypi instead - [see here for more information](https://www.npmjs.com/package/basedpyright?activeTab=readme). the basedpyright npm package is intended for users who are unable to use the pypi package for some reason. for example if you're using an operating system not [supported by nodejs-wheel](https://github.com/njzjz/nodejs-wheel?tab=readme-ov-file#available-builds) or a version of python older than 3.8. ## usage once installed, the `basedpyright` and `basedpyright-langserver` scripts will be available in your python environment. when running basedpyright via the command line, use the `basedpyright` command: ```shell basedpyright --help ``` for instructions on how to use `basedpyright-langserver`, see the [IDE-specific instructions](./ides.md). --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/installation/ides.md # IDEs !!! info note that most of these editor plugins require [the language server to be installed](./command-line-and-language-server.md). for information on how to configure the language server in your IDE, [see here](../configuration/language-server-settings.md#where-do-i-configure-these-settings). ## VSCode / VSCodium === "VSCode" install the extension from [the vscode extension marketplace](https://marketplace.visualstudio.com/items?itemName=detachhead.basedpyright) ??? "using basedpyright with pylance (not recommended)" unless you depend on any pylance-exclusive features that haven't yet been re-implemented in basedpyright, it's recommended to disable/uninstall the pylance extension. if you do want to continue using pylance, all of the options and commands in basedpyright have been renamed to avoid any conflicts with the pylance extension. basedpyright will automatically disable pylance if it's installed. however if you would like to use basedpyright's diagnostics with pylance's language server, you can change the following settings: ```json title=".vscode/settings.json" { // disable pylance's type checking and only use its language server "python.analysis.typeCheckingMode": "off", "python.languageServer": "Pylance", // disable basedpyright's language server and only use its type checking "basedpyright.disableLanguageServices": true } ``` ??? 
warning "if using basedpyright without the microsoft python extension" If `basedpyright` is installed within a virtual environment and the official Python extension ([`ms-python`](https://marketplace.visualstudio.com/items?itemName=ms-python.python)) is not installed, the VSCode extension will crash on load. This is a known issue ([#1188](https://github.com/detachhead/basedpyright/issues/1188)) because the automatic python interpreter detection is provided only by `ms-python`. The `basedpyright` VSCode extension by design does not depend explicitly on `ms-python`, due to concerns about telemetry. There are two workarounds for this problem: - Manually install `ms-python` - Set `basedpyright.importStrategy` to `useBundled` in your `.vscode/settings.json` === "VSCodium" install the extension from [the open VSX registry](https://open-vsx.org/extension/detachhead/basedpyright) !!! warning "if using basedpyright without the microsoft python extension" If `basedpyright` is installed within a virtual environment and the official Python extension ([`ms-python`](https://marketplace.visualstudio.com/items?itemName=ms-python.python)) is not installed, the VSCode extension will crash on load. This is a known issue ([#1188](https://github.com/detachhead/basedpyright/issues/1188)) because the automatic python interpreter detection is provided only by `ms-python`. The `basedpyright` VSCode extension by design does not depend explicitly on `ms-python`, due to concerns about telemetry. There are two workarounds for this problem: - Manually install `ms-python` - Set `basedpyright.importStrategy` to `useBundled` in your `.vscode/settings.json` the basedpyright extension will automatically look for the pypi package in your python environment. !!! tip "pinning basedpyright as a development dependency to your project (recommended)" we recommend adding it to the recommended extensions list in your workspace: ```json title=".vscode/extensions.json" { "recommendations": ["detachhead.basedpyright"] } ``` you should commit this file so that it prompts others working on your repo to install the extension as well. ## Neovim You need to install the LSP client adapter plugin, [nvim-lspconfig](https://github.com/neovim/nvim-lspconfig), for setting up the LSP for the editor. These configurations are for launching the LSP server, as well as for being able to give launching parameters at the same time. To install the **necessary sever command**, for the LSP server itself, use the [pypi package installation method](./command-line-and-language-server.md) (as mentioned previously in this section). Or if already using [Mason.nvim](https://github.com/williamboman/mason.nvim), follow their instructions for installing their packages. The latter approach allows you to have the version of BasedPyright maintained and upgraded by Mason project. ### Setting-up Neovim BasedPyright is available through the [`nvim-lspconfig`](https://github.com/neovim/nvim-lspconfig/blob/master/doc/configs.md#basedpyright) adapter for native Neovim's LSP support. After having both, the client-side plugin and the LSP server command installed, simply add this settings to your Neovim's settings: For Neovim 0.11+ ```lua vim.lsp.enable("basedpyright") ``` For Neovim 0.10 (legacy) ```lua local lspconfig = require("lspconfig") lspconfig.basedpyright.setup{} ``` Further info for this LSP server options for `nvim-lspconfig` are available on their docs, linked above. 
## Vim Vim users can install [coc-basedpyright](https://github.com/fannheyward/coc-basedpyright), the BasedPyright extension for coc.nvim. ## Sublime Text Sublime Text users can install both [LSP](https://packagecontrol.io/packages/LSP) and [LSP-basedpyright](https://packagecontrol.io/packages/LSP-basedpyright) via [Package Control](https://packagecontrol.io). ## Emacs Emacs users have 3 options: === "lsp-bridge" basedpyright is the default language server for python in [lsp-bridge](https://github.com/manateelazycat/lsp-bridge), so no additional configuration is required. === "eglot" add the following to your emacs config: ```emacs-lisp (add-to-list 'eglot-server-programs '((python-mode python-ts-mode) "basedpyright-langserver" "--stdio")) ``` === "lsp-mode" with [lsp-pyright](https://github.com/emacs-lsp/lsp-pyright) (any commit after `0c0d72a`; update the package if you encounter errors), add the following to your emacs config: ```emacs-lisp (setq lsp-pyright-langserver-command "basedpyright") ``` ## PyCharm === "PyCharm Community" 1. install the [LSP4IJ](https://plugins.jetbrains.com/plugin/23257-lsp4ij) plugin 2. install the [Pyright](https://plugins.jetbrains.com/plugin/24145) plugin 3. configure it to use basedpyright by specifying `basedpyright-langserver` as the executable and checking the "Resolve against interpreter directory, ignoring extension" checkbox:\ ![](./pycharm-lsp-exe.png) 4. set "Running mode" to "LSP4IJ":\ ![](./lsp4ij.png) === "PyCharm Professional / IntelliJ IDEA Ultimate" 1. install the [Pyright](https://plugins.jetbrains.com/plugin/24145) plugin 2. configure it to use basedpyright by specifying `basedpyright-langserver` as the executable and checking the "Resolve against interpreter directory, ignoring extension" checkbox:\ ![](./pycharm-lsp-exe.png) 3. set "Running mode" to "Native LSP client":\ ![](./native-lsp.png) !!! tip "pinning basedpyright as a development dependency to your project (recommended)" we recommend configuring these settings as overrides in the "Appearance & Behavior > Required Plugins" menu, and configuring the pyright plugin (and LSP4IJ if using pycharm community) as a recommended dependency: ![](pycharm-recommended-dependency.png) you should then commit the following generated config files, so that others working on your repo are prompted to install the plugin and don't have to manually configure it themselves: - `.idea/pyright-overrides.xml` - `.idea/pyright.xml` - `.idea/externalDependencies.xml` (note that pycharm hides the `.idea` directory by default, so you will need to `git add` the files via the CLI instead.) ## Helix Install the LSP server itself, using the [pypi package installation method](./command-line-and-language-server.md) (as mentioned previously in this section). Then add the following to your [languages file](https://docs.helix-editor.com/languages.html): ```toml [[language]] name = "python" language-servers = [ "basedpyright" ] ``` You can verify the active configuration by running `hx --health python`. ## Zed basedpyright is the default language server for python in zed. [see the docs](https://zed.dev/docs/languages/python#basedpyright) for more information. !!! tip "pinning basedpyright as a development dependency to your project (recommended)" we highly recommend installing and pinning basedpyright as a dev dependency for your project and using project settings (`.zed/settings.json`) to configure zed to use that version instead of a globally installed version.
this ensures that zed doesn't automatically update basedpyright unexpectedly to a version that your project may not be ready to use. ```json title=".zed/settings.json" { "lsp": { "basedpyright": { "binary": { "path": ".venv/bin/basedpyright-langserver", "arguments": ["--stdio"] } } } } ``` you should commit this file so that these settings are automatically applied for other developers working on your project. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/installation/prek-hook.md # prek hook ```yaml title=".pre-commit-config.yaml" repos: - repo: https://github.com/DetachHead/basedpyright-prek-mirror rev: {{ basedpyright_version() }} hooks: - id: basedpyright ``` !!! info "prek vs pre-commit" [prek](https://github.com/j178/prek) is a drop-in replacement for pre-commit written in rust, and with much better support from its maintainer. we highly recommend using it instead of pre-commit. ## should i use this? there are alternative approaches to commit hooks you may want to consider depending on your use case. ### checking your code before committing we instead recommend [integrating basedpyright with your IDE](./ides.md). doing so will show errors on your code as you write it, instead of waiting until you go to commit your changes. ### running non-python tools in a python project prek can be useful when the tool does not have a pypi package, because it can automatically manage nodejs and install npm packages for you without you ever having to install nodejs yourself. basedpyright already solves this problem with pyright by [bundling the npm package as a pypi package](../benefits-over-pyright/pypi-package-vscode-pinning.md), so you don't need to use prek. ### running basedpyright in the CI basedpyright already [integrates well with CI by default](../benefits-over-pyright/improved-ci-integration.md) when using the pypi package. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/shoutouts.md # Shoutouts some projects that helped make basedpyright possible ## [basedmypy](https://github.com/kotlinisland/basedmypy) basedmypy is a fork of mypy with a similar goal in mind: to fix some of the serious problems in mypy that do not seem to be a priority for the maintainers. it also adds many new features which may not be standardized but greatly improve the developer experience when working with python's far-from-perfect type system. basedmypy heavily inspired me to create basedpyright. while the two projects have similar goals, there are some differences: - basedmypy makes breaking changes to improve the typing system and its syntax. for example, it supports intersections, `(int) -> str` function type syntax and `foo is int` syntax for type guards. [more info here](https://kotlinisland.github.io/basedmypy/based_features.html) - basedpyright intends to be fully backwards compatible with all standard typing functionality. non-standard features will be fully optional and can be disabled, as we intend to support library developers who can't control what type checker their library is used with. - basedpyright's two main goals are to improve the type checker's accuracy and reliability with existing syntax, and to bridge the gap between pylance and pyright ## [pyright-inlay-hints](https://github.com/jbradaric/pyright-inlay-hints) one of the first pylance features we added was inlay hints and semantic highlighting. 
i had no clue where to begin until i found this project which had already done the bulk of the work which i was able to expand upon ## [docify](https://github.com/AThePeanut4/docify) used for [our builtin docstrings support](./benefits-over-pyright/pylance-features.md#docstrings-for-compiled-builtin-modules) ## [nodejs-wheel](https://github.com/njzjz/nodejs-wheel) this project made the basedpyright pypi package possible, which significantly simplified the process of installing pyright for python developers who aren't familiar with nodejs and npm. since we started using it in basedpyright, it has since been adopted by [the unofficial pyright pypi package](https://github.com/RobertCraigie/pyright-python/issues/231#issuecomment-2366599865) as well. ## [pyprojectx](https://github.com/pyprojectx/pyprojectx) this tool makes working on multiple different python projects so much less stressful. instead of installing all these project management tools like pdm, uv, etc. globally you can install and pin them inside your project without ever having to install anything first. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/builtins.md ## Extending Builtins The Python interpreter implicitly adds a set of symbols that are available within every module even though they are not explicitly imported. These so-called “built in” symbols include commonly-used types and functions such as “list”, “dict”, “int”, “float”, “min”, and “len”. Pyright gains knowledge of which types are included in “builtins” scope through the type stub file `builtins.pyi`. This stub file comes from the typeshed github repo and is bundled with pyright, along with type stubs that describe other stdlib modules. Some Python environments are customized to include additional builtins symbols. If you are using such an environment, you may want to tell Pyright about these additional symbols that are available at runtime. To do so, you can add a local type stub file called `__builtins__.pyi`. This file can be placed at the root of your project directory or at the root of the subdirectory specified in the `stubPath` setting (which is named `typings` by default). --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/commands.md # Language Server Commands basedpyright offers the following language server commands, which can be invoked from, for example, VS Code’s “Command Palette”, which can be accessed from the View menu or by pressing Cmd-Shift-P. ## Organize Imports This command reorders all imports found in the global (module-level) scope of the source file. As recommended in PEP8, imports are grouped into three groups, each separated by an empty line. The first group includes all built-in modules, the second group includes all third-party modules, and the third group includes all local modules. Within each group, imports are sorted alphabetically. And within each “from X import Y” statement, the imported symbols are sorted alphabetically. Pyright also rewraps any imports that don't fit within a single line, switching to multi-line formatting. !!! note we recommend using [ruff](https://docs.astral.sh/ruff/formatter/#sorting-imports) to organize your imports instead, because pyright does not provide a way to validate that imports are sorted via the CLI. ## Restart Server This command forces the type checker to discard all of its cached type information and restart analysis. It is useful in cases where new type stubs or libraries have been installed. 
## Write new errors to baseline writes any new errors to the [baseline](../benefits-over-pyright/baseline.md) file. the language server will automatically update it on-save if errors are removed from a file and no new errors were added. for more information about when to use this command, [see here](../benefits-over-pyright/baseline.md#how-often-do-i-need-to-update-the-baseline-file). --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/import-resolution.md ## Import Resolution ### Resolution Order If the import is relative (the module name starts with one or more dots), it resolves the import relative to the path of the importing source file. For absolute (non-relative) imports, Pyright employs the following resolution order: 1. Try to resolve using the **stubPath** as defined in the `stubPath` config entry or the `basedpyright.analysis.stubPath` setting. 2. Try to resolve using **code within the workspace**. * Try to resolve relative to the **root directory** of the execution environment. If no execution environments are specified in the config file, use the root of the workspace. For more information about execution environments, refer to the [configuration documentation](../configuration/config-files.md#execution-environment-options). * Try to resolve using any of the **extra paths** defined for the execution environment in the config file. If no execution environment applies, use the `basedpyright.analysis.extraPaths` setting. Extra paths are searched in the order in which they are provided in the config file or setting. * If no execution environment is configured, try to resolve using the **local directory `src`**. It is common for Python projects to place local source files within a directory of this name. 3. Try to resolve using **stubs or inlined types found within installed packages**. Pyright uses the configured Python environment to determine whether a package has been installed. For more details about how to configure your Python environment for Pyright, see below. If a Python environment is configured, Pyright looks in the `lib/site-packages`, `Lib/site-packages`, or `python*/site-packages` subdirectory. If no site-packages directory can be found, Pyright attempts to run the configured Python interpreter and ask it for its search paths. If no Python environment is configured, Pyright will use the default Python interpreter by invoking `python`. * For a given package, try to resolve first using a **stub package**. Stub packages, as defined in [PEP 561](https://www.python.org/dev/peps/pep-0561/#type-checker-module-resolution-order), are named the same as the original package but with “-stubs” appended. * Try to resolve using an **inline stub**, a “.pyi” file that ships within the package. * If the package contains a “py.typed” file as described in [PEP 561](https://www.python.org/dev/peps/pep-0561/), use inlined type annotations provided in “.py” files within the package. * If the `basedpyright.analysis.useLibraryCodeForTypes` setting is set to true, try to resolve using the **library implementation** (“.py” file). Some “.py” files may contain partial or complete type annotations. Pyright will use type annotations that are provided and do its best to infer any missing type information. 4. Try to resolve using a **stdlib typeshed stub**. If the `typeshedPath` is configured, use this instead of the typeshed stubs that are packaged with Pyright. This allows for the use of a newer or a patched version of the typeshed stdlib stubs. 5. 
Try to resolve using a **third-party typeshed** stub. If the `typeshedPath` is configured, use this instead of the typeshed stubs that are packaged with Pyright. This allows for the use of a newer or a patched version of the typeshed third-party stubs. 6. For an absolute import, if all of the above attempts fail, attempt to import a module from the same directory as the importing file and parent directories that are also children of the root workspace. This accommodates cases where it is assumed that a Python script will be executed from one of these subdirectories rather than from the root directory. ### Configuring Your Python Environment Pyright does not require a Python environment to be configured if all imports can be resolved using local files and type stubs. If a Python environment is configured, it will attempt to use the packages installed in the `site-packages` subdirectory during import resolution. Pyright uses the following mechanisms (in priority order) to determine which Python environment to use: 1. If a `venv` name is specified along with a `python.venvPath` setting (or a `--venvpath` command-line argument), it appends the venv name to the specified venv path. This mechanism is not recommended for most users: it is less robust than the next two options because it relies on pyright's internal logic to determine the import resolution paths based on the virtual environment directories and files. The other two mechanisms (2 and 3 below) use the configured python interpreter to determine the import resolution paths (the value of `sys.path`). 2. Use the `python.pythonPath` setting. This setting is defined in the language server. If using VS Code, it can be configured using the Python extension's environment picker interface. More recent versions of the Python extension no longer store the selected Python environment in the `python.pythonPath` setting and instead use a storage mechanism that is private to the extension. Pyright is able to access this through an API exposed by the Python extension. 3. If a virtual environment exists in a `.venv` folder at the project root, its python interpreter is used as the default value for `python.pythonPath`. [more info](../benefits-over-pyright/better-defaults.md#default-value-for-pythonpath) 4. As a fallback, use the default Python environment (i.e. the one that is invoked when typing `python` in the shell). ### Editable installs If you want to use static analysis tools with an editable install, you should configure the editable install to use `.pth` files that contain file paths rather than executable lines (prefixed with `import`) that install import hooks. See your build backend's documentation for details on how to do this. We have provided some basic information for common build backends below. Import hooks can provide an editable installation that is a more accurate representation of your real installation. However, because resolving module locations using an import hook requires executing Python code, import hooks are not usable by Pyright and other static analysis tools. Therefore, if your editable install is configured to use import hooks, Pyright will be unable to find the corresponding source files.
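To see why, here is a rough sketch (with a made-up package name and path) of the kind of import hook an editable install might register. Resolving `my_package` requires actually running this finder, which a static type checker never does:

```python
# Hypothetical illustration of an editable-install import hook. The package
# name and source path are invented; real build backends generate their own.
import importlib.abc
import importlib.util
import sys


class EditableFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path=None, target=None):
        # The module's real location is only known once this code runs,
        # so pyright cannot statically discover it.
        if fullname == "my_package":
            return importlib.util.spec_from_file_location(
                fullname, "/path/to/src/my_package/__init__.py"
            )
        return None


# Typically registered via a `.pth` line such as "import my_editable_hook",
# which the interpreter executes at startup.
sys.meta_path.insert(0, EditableFinder())
```

A path-based editable install, by contrast, simply writes the source directory into a `.pth` file, which pyright can read without executing anything.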
#### setuptools `setuptools` supports two ways to avoid import hooks: - [compat mode](https://setuptools.pypa.io/en/latest/userguide/development_mode.html#legacy-behavior) - [strict mode](https://setuptools.pypa.io/en/latest/userguide/development_mode.html#strict-editable-installs) #### uv [uv's build backend](https://docs.astral.sh/uv/concepts/build-backend/) uses `.pth` files by default. #### Hatchling [Hatchling](https://hatch.pypa.io/latest/config/build/#dev-mode) uses path-based `.pth` files by default. It will only use import hooks if you set `dev-mode-exact` to `true`. #### PDM [PDM](https://pdm.fming.dev/latest/pyproject/build/#editable-build-backend) uses path-based `.pth` files by default. It will only use import hooks if you set `editable-backend` to `"editables"`. ### Debugging Import Resolution Problems The import resolution mechanisms in Python are complicated, and Pyright offers many configuration options. If you are encountering problems with import resolution, Pyright provides additional logging that may help you identify the cause. To enable verbose logging, pass `--verbose` as a command-line argument or add the following entry to the config file `"verboseOutput": true`. If you are using the Pyright VS Code extension, the additional logging will appear in the Output tab (select “Pyright” from the menu). Please include this verbose logging when reporting import resolution bugs. --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/import-statements.md ## Import Statements ### Loader Side Effects An import statement instructs the Python import loader to perform several operations. For example, the statement `from a.b import Foo as Bar` causes the following steps to be performed at runtime: 1. Load and execute module `a` if it hasn’t previously been loaded. Cache a reference to `a`. 2. Load and execute submodule `b` if it hasn’t previously been loaded. 3. Store a reference to submodule `b` to the variable `b` within module `a`’s namespace. 4. Look up attribute `Foo` within module `b`. 5. Assign the value of attribute `Foo` to a local variable called `Bar`. If another source file were to subsequently execute the statement `import a`, it would observe `b` in the namespace of `a` as a side effect of step 3 in the the earlier import operation. Relying on such side effects leads to fragile code because a change in execution ordering or a modification to one module can break code in another module. Reliance on such side effects is therefore considered a bug by Pyright, which intentionally does not attempt to model such side effects. ### Implicit Module Loads Pyright models two loader side effects that are considered safe and are commonly used in Python code. 1. If an import statement targets a multi-part module name and does not use an alias, all modules within the multi-part module name are assumed to be loaded. For example, the statement `import a.b.c` is treated as though it is three back-to-back import statements: `import a`, `import a.b` and `import a.b.c`. This allows for subsequent use of all symbols in `a`, `a.b`, and `a.b.c`. If an alias is used (e.g. `import a.b.c as abc`), this is assumed to load only module `c`. A subsequent `import a` would not provide access to `a.b` or `a.b.c`. 2. If an `__init__.py` file includes an import statement of the form `from . import a`, the local variable `a` is assigned a reference to submodule `a`. 
Similarly, if an `__init__.py` file includes an import statement of the form `from .a import b`, the local variable `a` is assigned a reference to submodule `a`. This statement form is treated as though it is two back-to-back import statements: `from . import a` followed by `from .a import b`.

### Unsupported Loader Side Effects

All other module loader side effects are intentionally _not_ modeled by Pyright and should not be relied upon in code. Examples include:

- If one module contains the statement `import a.b` and a second module includes `import a`, the second module should not rely on the fact that `a.b` is now accessible as a side effect of the first module's import.
- If a module contains the statement `import a.b` in the global scope and a function that includes the statement `import a` or `import a.c`, the function should not assume that it can access `a.b`. This assumption might or might not be safe depending on execution order.
- If a module contains the statements `import a.b as foo` and `import a`, code within that module should not assume that it can access `a.b`. Such an assumption might be safe depending on the relative order of the statements and the order in which they are executed, but it leads to fragile code.

---

# Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/mypy-comparison.md

## Differences Between Pyright and Mypy

### What is Mypy?

Mypy is the "OG" in the world of Python type checkers. It was started by Jukka Lehtosalo in 2012 with contributions from Guido van Rossum, Ivan Levkivskyi, and many others over the years. For a detailed history, refer to [this documentation](http://mypy-lang.org/about.html). The code for mypy can be found in [this github project](https://github.com/python/mypy).

### Why Does Pyright's Behavior Differ from Mypy's?

Mypy served as a reference implementation of [PEP 484](https://www.python.org/dev/peps/pep-0484/), which defines standard behaviors for Python static typing. Although PEP 484 spells out many type checking behaviors, it intentionally leaves many other behaviors undefined. This approach has allowed different type checkers to innovate and differentiate.

Pyright generally adheres to the official [Python typing specification](https://typing.readthedocs.io/en/latest/spec/index.html), which incorporates and builds upon PEP 484 and other typing-related PEPs. The typing spec is accompanied by an ever-expanding suite of conformance tests. For the latest conformance test results for pyright, mypy and other type checkers, refer to [this page](https://htmlpreview.github.io/?https://github.com/python/typing/blob/main/conformance/results/results.html).

For behaviors that are not explicitly spelled out in the typing spec, pyright generally tries to adhere to mypy's behavior unless there is a compelling justification for deviating. This document discusses these differences and provides the reasoning behind each design choice.

### Design Goals

Pyright was designed with performance in mind. It is not unusual for pyright to be 3x to 5x faster than mypy when type checking large code bases. Some of its design decisions were motivated by this goal.

Pyright was also designed to be used as the foundation for a Python [language server](https://microsoft.github.io/language-server-protocol/). Language servers provide interactive programming features such as completion suggestions, function signature help, type information on hover, semantic-aware search, semantic-aware renaming, semantic token coloring, refactoring tools, etc.
For a good user experience, these features require highly responsive type evaluation performance during interactive code modification. They also require type evaluation to work on code that is incomplete and contains syntax errors.

To achieve these design goals, pyright is implemented as a "lazy" or "just-in-time" type evaluator. Rather than analyzing all code in a module from top to bottom, it is able to evaluate the type of an arbitrary identifier anywhere within a module. If the type of that identifier depends on the types of other expressions or symbols, pyright recursively evaluates those in turn until it has enough information to determine the type of the target identifier. By comparison, mypy uses a more traditional multi-pass architecture where semantic analysis is performed multiple times on a module from top to bottom until all types converge.

Pyright implements its own parser, which recovers gracefully from syntax errors and continues parsing the remainder of the source file. By comparison, mypy uses the parser built into the Python interpreter, and it does not support recovery after a syntax error. This also means that when you run mypy on an older version of Python, it cannot support newer language features that require grammar changes.

### Type Checking Unannotated Code

By default, pyright performs type checking for all code regardless of whether it contains type annotations. This is important for language server features. It is also important for catching bugs in code that is unannotated.

By default, mypy skips all functions or methods that do not have type annotations. This is a common source of confusion for mypy users who are surprised when type violations in unannotated functions go unreported. If the option `--check-untyped-defs` is enabled, mypy performs type checking for all functions and methods.

### Inferred Return Types

If a function or method lacks a return type annotation, pyright infers the return type from `return` and `yield` statements within the function's body (including the implied `return None` at the end of the function body). This is important for supporting completion suggestions. It also improves type checking coverage and eliminates the need for developers to supply return type annotations for trivial return types.

By comparison, mypy never infers return types and assumes that functions without a return type annotation have a return type of `Any`. This was an intentional design decision by mypy developers and is explained in [this thread](https://github.com/python/mypy/issues/10149).

### Unions vs Joins

When merging two types during code flow analysis or widening types during constraint solving, pyright always uses a union operation. Mypy typically (but not always) uses a "join" operation, which merges types by finding a common supertype. The use of joins discards valuable type information and leads to many false positive errors that are [well documented within the mypy issue tracker](https://github.com/python/mypy/issues?q=is%3Aissue+is%3Aopen+label%3Atopic-join-v-union).

```python
def func1(val: object):
    if isinstance(val, str):
        pass
    elif isinstance(val, int):
        pass
    else:
        return

    reveal_type(val)  # mypy: object, pyright: str | int
```

### Variable Type Declarations

Pyright treats variable type annotations as type declarations. If a variable is not annotated, pyright allows any value to be assigned to that variable, and its type is inferred to be the union of all assigned types.
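For example, here is a small illustrative sketch (the function and variable names are made up for this comparison rather than taken from the pyright documentation) of pyright's union-based inference for an unannotated variable:

```python
def func0(condition: bool):
    if condition:
        val = 1
    else:
        val = "hi"

    reveal_type(val)  # pyright: int | str (the union of all assigned types)

    val = None  # also allowed, because `val` has no declared type
```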
Mypy's behavior for variables depends on whether [`--allow-redefinition`](https://mypy.readthedocs.io/en/stable/command_line.html#cmdoption-mypy-allow-redefinition) is specified. If redefinitions are not allowed, then mypy typically treats the first assignment (the one with the smallest line number) as though it is an implicit type declaration.

```python
def func1(condition: bool):
    if condition:
        x = 3  # Mypy treats this as an implicit type declaration
    else:
        x = ""  # Mypy treats this as an error because `x` is implicitly declared as `int`

def func2(condition: bool):
    x = None  # Mypy provides some exceptions; this is not considered an implicit type declaration
    if condition:
        x = ""  # This is not considered an error

def func3(condition: bool):
    x = []  # Mypy doesn't treat this as a declaration
    if condition:
        x = [1, 2, 3]  # The type of `x` is declared as `list[int]`
```

Pyright's behavior is more consistent, is conceptually simpler and more natural for Python developers, leads to fewer false positives, and eliminates the need for many otherwise-necessary variable type annotations.

### Class and Instance Variable Inference

Pyright handles instance and class variables consistently with local variables. If a type annotation is provided for an instance or class variable (either within the class or one of its base classes), pyright treats this as a type declaration and enforces it accordingly. If a class implementation does not provide a type annotation for an instance or class variable and its base classes likewise do not provide a type annotation, the variable's type is inferred from all assignments within the class implementation.

```python
class A:
    def method1(self) -> None:
        self.x = 1

    def method2(self) -> None:
        self.x = ""  # Mypy treats this as an error because `x` is implicitly declared as `int`

a = A()
reveal_type(a.x)  # pyright: int | str

a.x = ""  # Pyright allows this because the type of `x` is `int | str`
a.x = 3.0  # Pyright treats this as an error because the type of `x` is `int | str`
```

### Class and Instance Variable Enforcement

Pyright distinguishes between "pure class variables", "regular class variables", and "pure instance variables". For a detailed explanation, refer to [this documentation](type-concepts-advanced.md#class-and-instance-variables).

Mypy does not distinguish between class variables and instance variables in all cases. This is a [known issue](https://github.com/python/mypy/issues/240).

```python
class A:
    x: int = 0  # Regular class variable
    y: ClassVar[int] = 0  # Pure class variable

    def __init__(self):
        self.z = 0  # Pure instance variable

print(A.x)
print(A.y)
print(A.z)  # pyright: error, mypy: no error
```

### Assignment-based Type Narrowing

Pyright applies type narrowing for variable assignments. This is done regardless of whether the assignment statement includes a variable type annotation. Mypy skips assignment-based type narrowing when the target variable includes a type annotation. The consensus of the typing community is that mypy's behavior here is inconsistent, and there are [plans to eliminate this inconsistency](https://github.com/python/mypy/issues/2008).

```python
v1: Sequence[int]
v1 = [1, 2, 3]
reveal_type(v1)  # mypy and pyright both reveal `list[int]`

v2: Sequence[int] = [1, 2, 3]
reveal_type(v2)  # mypy reveals `Sequence[int]` rather than `list[int]`
```

### Type Guards

Pyright supports several built-in type guards that mypy does not currently support.
For a full list of type guard expression forms supported by pyright, refer to [this documentation](type-concepts-advanced.md#type-guards). The following expression forms are not currently supported by mypy as type guards:

* `x == L` and `x != L` (where L is an expression with a literal type)
* `x in y` or `x not in y` (where y is an instance of list, set, frozenset, deque, tuple, dict, defaultdict, or OrderedDict)
* `bool(x)` (where x is any expression that is statically verifiable to be truthy or falsey in all cases)

### Aliased Conditional Expressions

Pyright supports the [aliasing of conditional expressions](type-concepts-advanced.md#aliased-conditional-expression) used for type guards. Mypy does not currently support this, but it is a frequently-requested feature.

### Narrowing Any

Pyright never narrows `Any` when performing type narrowing for assignments. Mypy is inconsistent about when it applies type narrowing to `Any` type arguments.

```python
b: list[Any]

b = [1, 2, 3]
reveal_type(b)  # pyright: list[Any], mypy: list[Any]

c = [1, 2, 3]
b = c
reveal_type(b)  # pyright: list[Any], mypy: list[int]
```

### Inference of List, Set, and Dict Expressions

Pyright's inference rules for [list, set and dict expressions](type-inference.md#list-expressions) differ from mypy's when values with heterogeneous types are used. Mypy uses a join operator to combine the types. Pyright uses either an `Unknown` or a union depending on configuration settings. A join operator often produces a type that is not what was intended, and this leads to false positive errors.

```python
x = [1, 3.4, ""]
reveal_type(x)  # mypy: list[object], pyright: list[Unknown] or list[int | float | str]
```

For these mutable container types, pyright does not retain literal types when inferring the container type. Mypy is inconsistent, sometimes retaining literal types and sometimes not.

```python
def func(one: Literal[1]):
    reveal_type(one)  # Literal[1]
    reveal_type([one])  # pyright: list[int], mypy: list[Literal[1]]

    reveal_type(1)  # Literal[1]
    reveal_type([1])  # pyright: list[int], mypy: list[int]
```

### Inference of Tuple Expressions

Pyright's inference rules for [tuple expressions](type-inference.md#tuple-expressions) differ from mypy's when tuple entries contain literals. Pyright retains these literal types, but mypy widens the types to their non-literal type. Pyright retains the literal types in this case because tuples are immutable, and more precise (narrower) types are almost always beneficial in this situation.

```python
x = (1, "stop")
reveal_type(x[1])  # pyright: Literal["stop"], mypy: str

y: Literal["stop", "go"] = x[1]  # mypy: type error
```

### Assignment-Based Narrowing for Literals

When assigning a literal value to a variable, pyright narrows the type to reflect the literal. Mypy does not. Pyright retains the literal types in this case because more precise (narrower) types are typically beneficial and have little or no downside.

```python
x: str | None
x = 'a'
reveal_type(x)  # pyright: Literal['a'], mypy: str
```

Pyright also supports "literal math" for simple operations involving literals.

```python
def func1(a: Literal[1, 2], b: Literal[2, 3]):
    c = a + b
    reveal_type(c)  # Literal[3, 4, 5]

def func2():
    c = "hi" + " there"
    reveal_type(c)  # Literal['hi there']
```

### Type Narrowing for Asymmetric Descriptors

When pyright evaluates a write to a class variable that contains a descriptor object (including properties), it normally applies assignment-based type narrowing.
However, when the descriptor is asymmetric (that is, its "getter" type is different from its "setter" type), pyright refrains from applying assignment-based type narrowing. For a full discussion of this, refer to [this issue](https://github.com/python/mypy/issues/3004). Mypy has not yet implemented the agreed-upon behavior, so its type narrowing behavior may differ from pyright's in this case.

### Parameter Type Inference

Mypy infers the type of `self` and `cls` parameters in methods but otherwise does not infer any parameter types.

Pyright implements several parameter type inference techniques that improve type checking and language service features in the absence of explicit parameter type annotations. For details, refer to [this documentation](type-inference.md#parameter-type-inference).

### Constructor Calls

When pyright evaluates a call to a constructor, it attempts to follow the runtime behavior as closely as possible. At runtime, when a constructor is called, it invokes the `__call__` method of the metaclass. Most classes use `type` as their metaclass. (Even when a different metaclass is used, it typically does not override `type.__call__`.) The `type.__call__` method calls the `__new__` method for the class and passes all of the arguments (both positional and keyword) that were passed to the constructor call. If the `__new__` method returns an instance of the class (or a child class), `type.__call__` then calls the `__init__` method on the class.

Pyright follows this same flow for evaluating the type of a constructor call. If a custom metaclass is present, pyright evaluates its `__call__` method to determine whether it returns an instance of the class. If not, it assumes that the metaclass has custom behavior that overrides `type.__call__`. Likewise, if a class provides a `__new__` method that returns a type other than the class being constructed (or a child class thereof), it assumes that `__init__` will not be called.

By comparison, mypy first evaluates the `__init__` method if present, and it ignores the annotated return type of the `__new__` method.

### `None` Return Type

If the return type of a function is declared as `None`, an attempt to call that function and consume the returned value is flagged as an error by mypy. The justification is that this is a common source of bugs. Pyright does not special-case `None` in this manner because there are legitimate use cases, and in our experience, this class of bug is rare.

### Constraint Solver Behaviors

When evaluating a call expression that invokes a generic class constructor or a generic function, a type checker performs a process called "constraint solving" to solve the type variables found within the target function signature. The solved type variables are then applied to the return type of that function to determine the final type of the call expression. This process is called "constraint solving" because it takes into account various constraints that are specified for each type variable. These constraints include variance rules and type variable bounds.

Many aspects of constraint solving are unspecified in PEP 484. This includes behaviors around literals, whether to use unions or joins to widen types, and how to handle cases where multiple types could satisfy all type constraints.

#### Constraint Solver: Literals

Pyright's constraint solver retains literal types only when they are required to satisfy constraints. In other cases, it widens the type to a non-literal type. Mypy is inconsistent in its handling of literal types.
```python
T = TypeVar("T")

def identity(x: T) -> T:
    return x

def func(one: Literal[1]):
    reveal_type(one)  # Literal[1]
    v1 = identity(one)
    reveal_type(v1)  # pyright: int, mypy: Literal[1]

    reveal_type(1)  # Literal[1]
    v2 = identity(1)
    reveal_type(v2)  # pyright: int, mypy: int
```

#### Constraint Solver: Type Widening

As mentioned previously, pyright always uses unions rather than joins. Mypy typically uses joins. This applies to type widening during the constraint solving process.

```python
T = TypeVar("T")

def func(val1: T, val2: T) -> T: ...

reveal_type(func("", 1))  # mypy: object, pyright: str | int
```

#### Constraint Solver: Ambiguous Solution Scoring

In cases where more than one solution is possible for a type variable, both pyright and mypy employ various heuristics to pick the "best" solution. These heuristics are complex and difficult to document in their fullness. Pyright's general strategy is to return the "simplest" type that meets the constraints. Consider the expression `make_list(x)` in the example below. The type constraints for `T` could be satisfied with either `int` or `list[int]`, but it's much more likely that the developer intended the former (simpler) solution. Pyright calculates all possible solutions and "scores" them according to complexity, then picks the type with the best score. In rare cases, there can be two results with the same score, in which case pyright arbitrarily picks one as the winner. Mypy produces errors with this sample.

```python
T = TypeVar("T")

def make_list(x: T | Iterable[T]) -> list[T]:
    return list(x) if isinstance(x, Iterable) else [x]

def func2(x: list[int], y: list[str] | int):
    v1 = make_list(x)
    reveal_type(v1)  # pyright: "list[int]" ("list[list[T]]" is also a valid answer)

    v2 = make_list(y)
    reveal_type(v2)  # pyright: "list[int | str]" ("list[list[str] | int]" is also a valid answer)
```

### Value-Constrained Type Variables

When mypy analyzes a class or function that has in-scope value-constrained TypeVars, it analyzes the class or function multiple times, once for each constraint. This can produce multiple errors.

```python
T = TypeVar("T", list[Any], set[Any])

def func(a: AnyStr, b: T):
    reveal_type(a)  # Mypy reveals 2 different types ("str" and "bytes"), pyright reveals "AnyStr"
    return a + b  # Mypy reports 4 errors
```

Pyright cannot use the same multi-pass technique as mypy in this case. It needs to produce a single type for any given identifier to support language server features. Pyright instead uses a mechanism called [conditional types](type-concepts-advanced.md#conditional-types-and-type-variables). This approach allows pyright to handle some value-constrained TypeVar use cases that mypy cannot, but there are conversely other use cases that mypy can handle and pyright cannot.

### "Unknown" Type and Strict Mode

Pyright differentiates between explicit and implicit forms of `Any`. The implicit form is referred to as [`Unknown`](type-inference.md#unknown-type). For example, if a parameter is annotated as `list[Any]`, that is a use of an explicit `Any`, but if a parameter is annotated as `list`, that is an implicit `Any`, so pyright refers to this type as `list[Unknown]`.

Pyright implements several checks that are enabled in "strict" type-checking modes that report the use of an `Unknown` type. Such uses can mask type errors.

Mypy does not track the difference between explicit and implicit `Any` types, but it supports various checks that report the use of values whose type is `Any`: `--warn-return-any` and `--disallow-any-*`.
For details, refer to [this documentation](https://mypy.readthedocs.io/en/stable/command_line.html#disallow-dynamic-typing).

Pyright's approach gives developers more control. It provides a way to be explicit about `Any` where that is the intent. When an `Any` is implicitly produced due to a missing type argument or some other condition that produces an `Any` within the type checker logic, the developer is alerted to that condition.

### Overload Resolution

Overload resolution rules are under-specified in PEP 484. Pyright and mypy apply similar rules, but there are inevitably cases where different results will be produced. For full documentation of pyright's overload behaviors, refer to [this documentation](type-concepts-advanced.md#overloads).

One known difference is in the handling of ambiguous overloads due to `Any` argument types where one return type is the supertype of all other return types. In this case, pyright evaluates the resulting return type as the supertype, but mypy evaluates the return type as `Any`. Pyright's behavior here tries to preserve as much type information as possible, which is important for completion suggestions.

```python
@overload
def func1(x: int) -> int: ...
@overload
def func1(x: str) -> float: ...

def func2(val: Any):
    reveal_type(func1(val))  # mypy: Any, pyright: float
```

### Import Statements

Pyright intentionally does not model implicit side effects of the Python import loading mechanism. In general, such side effects cannot be modeled statically because they depend on execution order. Dependency on such side effects leads to fragile code, so pyright treats these as errors. For more details, refer to [this documentation](import-statements.md).

Mypy models side effects of the import loader that are potentially unsafe.

```python
import http

def func():
    import http.cookies

# The next line raises an exception at runtime
x = http.cookies  # mypy allows, pyright flags as error
```

### Ellipsis in Function Body

If Pyright encounters a function body whose implementation is `...`, it does not enforce the return type annotation. The `...` semantically means "this is a code placeholder", a convention established in type stubs, protocol definitions, and elsewhere.

Mypy treats `...` function bodies as though they are executable and enforces the return type annotation. This was a recent change in mypy, made long after Pyright established a different behavior. Prior to mypy's recent change, it did not enforce return types for function bodies consisting of either `...` or `pass`. Now it enforces both.

### Circular References

Because mypy is a multi-pass analyzer, it is able to deal with certain forms of circular references that pyright cannot handle. Here are several examples of circularities that mypy resolves without errors but pyright does not.

1. A class declaration that references a metaclass whose declaration depends on the class.

```python
T = TypeVar("T")

class MetaA(type, Generic[T]): ...

class A(metaclass=MetaA["A"]): ...
```

2. A class declaration that uses a TypeVar whose bound or constraint depends on the class.

```python
T = TypeVar("T", bound="A")

class A(Generic[T]): ...
```

3. A class that is decorated with a class decorator that uses the class in the decorator's own signature.

```python
def my_decorator(x: Callable[..., "A"]) -> Callable[..., "A"]:
    return x

@my_decorator
class A: ...
```

### Class Decorator Evaluation

Pyright honors class decorators. Mypy largely ignores them. See [this issue](https://github.com/python/mypy/issues/3135) for details.
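To illustrate what honoring a class decorator means, here is a hedged sketch (the decorator and class names are hypothetical and not taken from either tool's documentation). Pyright applies the decorator's declared signature when determining the resulting type of the decorated class symbol:

```python
from typing import Callable

def replace_with_factory(cls: type) -> Callable[[], dict[str, object]]:
    # Hypothetical decorator that replaces the class with a factory function.
    ...

@replace_with_factory
class Config: ...

reveal_type(Config)  # pyright evaluates the decorator, so this is `() -> dict[str, object]`
```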
### Support for Type Comments

Versions of Python prior to 3.0 did not have a dedicated syntax for supplying type annotations. Annotations therefore needed to be supplied using "type comments" of the form `# type: <annotation>`. Python 3.6 added the ability to supply type annotations for variables.

Mypy has full support for type comments. Pyright supports type comments only in locations where there is a way to provide an annotation using modern syntax. Pyright was written to assume Python 3.5 and newer, so support for older versions was not a priority.

```python
# The following type comment is supported by
# mypy but is rejected by pyright.
x, y = (3, 4)  # type: (float, float)

# Using Python syntax from Python 3.6, this
# would be annotated as follows:
x: float
y: float
x, y = (3, 4)
```

### Plugins

Mypy supports a plug-in mechanism, whereas pyright does not. Mypy plugins allow developers to extend mypy's capabilities to accommodate libraries that rely on behaviors that cannot be described using the standard type checking mechanisms.

Pyright maintainers have made the decision not to support plug-ins because of their many downsides: discoverability, maintainability, cost of development for the plug-in author, cost of maintenance for the plug-in object model and API, security, performance (especially latency, which is critical for language servers), and robustness. Instead, we have taken the approach of working with the typing community and library authors to extend the type system so it can accommodate more use cases. An example of this is [PEP 681](https://peps.python.org/pep-0681/), which introduced `dataclass_transform`.

---

# Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/type-concepts-advanced.md

## Static Typing: Advanced Topics

### Type Narrowing

Pyright uses a technique called "type narrowing" to track the type of an expression based on code flow. Consider the following code:

```python
val_str: str = "hi"
val_int: int = 3

def func(val: float | str | complex, test: bool):
    reveal_type(val)  # float | str | complex

    val = val_int  # Type is narrowed to int
    reveal_type(val)  # int

    if test:
        val = val_str  # Type is narrowed to str
        reveal_type(val)  # str

    reveal_type(val)  # int | str

    if isinstance(val, int):
        reveal_type(val)  # int
        print(val)
    else:
        reveal_type(val)  # str
        print(val)
```

At the start of this function, the type checker knows nothing about `val` other than that its declared type is `float | str | complex`. Then it is assigned a value that has a known type of `int`. This is a legal assignment because `int` is considered a subclass of `float`. At the point in the code immediately after the assignment, the type checker knows that the type of `val` is an `int`. This is a "narrower" (more specific) type than `float | str | complex`. Type narrowing is applied whenever a symbol is assigned a new value.

Another assignment occurs several lines further down, this time within a conditional block. The symbol `val` is assigned a value known to be of type `str`, so the narrowed type of `val` is now `str`. Once the code flow of the conditional block merges with the main body of the function, the narrowed type of `val` becomes `int | str` because the type checker cannot statically predict whether the conditional block will be executed at runtime.

Another way that types can be narrowed is through the use of conditional code flow statements like `if`, `while`, and `assert`.
Type narrowing applies to the block of code that is "guarded" by that condition, so type narrowing in this context is sometimes referred to as a "type guard". For example, if you see the conditional statement `if x is None:`, the code within that `if` statement can assume that `x` contains `None`.

Within the code sample above, we see an example of a type guard involving a call to `isinstance`. The type checker knows that `isinstance(val, int)` will return True only in the case where `val` contains a value of type `int`, not type `str`. So the code within the `if` block can assume that `val` contains a value of type `int`, and the code within the `else` block can assume that `val` contains a value of type `str`. This demonstrates how a type (in this case `int | str`) can be narrowed in both a positive (`if`) and negative (`else`) test.

The following expression forms support type narrowing:

* `<name>` (where `<name>` is an identifier)
* `<expr>.<member>` (member access expression where `<expr>` is a supported expression form)
* `<name> := <expr>` (assignment expression where `<expr>` is a supported expression form)
* `<expr>[<int>]` (subscript expression where `<int>` is a non-negative integer)
* `<expr>[<str>]` (subscript expression where `<str>` is a string literal)

Examples of expressions that support type narrowing:

* `my_var`
* `employee.name`
* `a.foo.next`
* `args[3]`
* `kwargs["bar"]`
* `a.b.c[3]["x"].d`

### Type Guards

In addition to assignment-based type narrowing, Pyright supports the following type guards.

* `x is None` and `x is not None`
* `x == None` and `x != None`
* `x is ...` and `x is not ...` (where `...` is an ellipsis token)
* `x == ...` and `x != ...` (where `...` is an ellipsis token)
* `x is S` and `x is not S` (where S is a Sentinel)
* `type(x) is T` and `type(x) is not T`
* `type(x) == T` and `type(x) != T`
* `x is L` and `x is not L` (where L is an expression that evaluates to a literal type)
* `x is C` and `x is not C` (where C is a class)
* `x == L` and `x != L` (where L is an expression that evaluates to a literal type)
* `x.y is None` and `x.y is not None` (where x is a type that is distinguished by a field with a None)
* `x.y is E` and `x.y is not E` (where E is a literal enum or bool and x is a type that is distinguished by a field with a literal type)
* `x.y == LN` and `x.y != LN` (where LN is a literal expression or `None` and x is a type that is distinguished by a field or property with a literal type)
* `x[K] == V`, `x[K] != V`, `x[K] is V`, and `x[K] is not V` (where K and V are literal expressions and x is a type that is distinguished by a TypedDict field with a literal type)
* `x[I] == V` and `x[I] != V` (where I and V are literal expressions and x is a known-length tuple that is distinguished by the index indicated by I)
* `x[I] is B` and `x[I] is not B` (where I is a literal expression, B is a `bool` or enum literal, and x is a known-length tuple that is distinguished by the index indicated by I)
* `x[I] is None` and `x[I] is not None` (where I is a literal expression and x is a known-length tuple that is distinguished by the index indicated by I)
* `len(x) == L`, `len(x) != L`, `len(x) < L`, etc.
(where x is a tuple and L is an expression that evaluates to an int literal type)
* `x in y` or `x not in y` (where y is an instance of list, set, frozenset, deque, tuple, dict, defaultdict, or OrderedDict)
* `S in D` and `S not in D` (where S is a string literal and D is a TypedDict)
* `isinstance(x, T)` (where T is a type or a tuple of types)
* `issubclass(x, T)` (where T is a type or a tuple of types)
* `f(x)` (where f is a user-defined type guard as defined in [PEP 647](https://www.python.org/dev/peps/pep-0647/) or [PEP 742](https://www.python.org/dev/peps/pep-0742))
* `bool(x)` (where x is any expression that is statically verifiable to be truthy or falsey in all cases)
* `x` (where x is any expression that is statically verifiable to be truthy or falsey in all cases)

Expressions supported for type guards include simple names, member access chains (e.g. `a.b.c.d`), the unary `not` operator, the binary `and` and `or` operators, subscripts that are integer literals (e.g. `a[2]` or `a[-1]`), and call expressions. Other operators (such as arithmetic operators or other subscripts) are not supported.

Some type guards are able to narrow in both the positive and negative cases. Positive cases are used in `if` statements, and negative cases are used in `else` statements. (Positive and negative cases are flipped if the type guard expression is preceded by a `not` operator.) In some cases, the type can be narrowed only in the positive or negative case but not both. Consider the following examples:

```python
class Foo: pass
class Bar: pass

def func1(val: Foo | Bar):
    if isinstance(val, Bar):
        reveal_type(val)  # Bar
    else:
        reveal_type(val)  # Foo

def func2(val: float | None):
    if val:
        reveal_type(val)  # float
    else:
        reveal_type(val)  # float | None
```

In the example of `func1`, the type was narrowed in both the positive and negative cases. In the example of `func2`, the type was narrowed only in the positive case because the type of `val` might be either `float` (specifically, a value of 0.0) or `None` in the negative case.

### Aliased Conditional Expression

Pyright also supports a type guard expression `c`, where `c` is an identifier that refers to a local variable that is assigned one of the above supported type guard expression forms. These are called "aliased conditional expressions". Examples include `c = a is not None` and `c = isinstance(a, str)`. When `c` is used within a conditional check, it can be used to narrow the type of expression `a`.

This pattern is supported only in cases where `c` is a local variable within a module or function scope and is assigned a value only once. It is also limited to cases where expression `a` is a simple identifier (as opposed to a member access expression or subscript expression), is local to the function or module scope, and is assigned only once within the scope. Unary `not` operators are allowed for expression `a`, but binary `and` and `or` are not.

```python
def func1(x: str | None):
    is_str = x is not None
    if is_str:
        reveal_type(x)  # str
    else:
        reveal_type(x)  # None
```

```python
def func2(val: str | bytes):
    is_str = not isinstance(val, bytes)
    if not is_str:
        reveal_type(val)  # bytes
    else:
        reveal_type(val)  # str
```

```python
def func3(x: list[str | None]) -> str:
    is_str = x[0] is not None
    if is_str:
        # This technique doesn't work for subscript expressions,
        # so x[0] is not narrowed in this case.
        reveal_type(x[0])  # str | None
```

```python
def func4(x: str | None):
    is_str = x is not None
    if is_str:
        # This technique doesn't work in cases where the target
        # expression is assigned elsewhere. Here `x` is assigned
        # elsewhere in the function, so its type is not narrowed
        # in this case.
        reveal_type(x)  # str | None

    x = ""
```

### Narrowing for Implied Else

When an "if" or "elif" clause is used without a corresponding "else", Pyright will generally assume that the code can "fall through" without executing the "if" or "elif" block. However, there are cases where the analyzer can determine that a fall-through is not possible because the "if" or "elif" is guaranteed to be executed based on type analysis.

```python
def func1(x: int):
    if x == 1 or x == 2:
        y = True
    print(y)  # Error: "y" is possibly unbound

def func2(x: Literal[1, 2]):
    if x == 1 or x == 2:
        y = True
    print(y)  # No error
```

This can be especially useful when exhausting all members in an enum or types in a union.

```python
from enum import Enum

class Color(Enum):
    RED = 1
    BLUE = 2
    GREEN = 3

def func3(color: Color) -> str:
    if color == Color.RED or color == Color.BLUE:
        return "yes"
    elif color == Color.GREEN:
        return "no"

def func4(value: str | int) -> str:
    if isinstance(value, str):
        return "received a str"
    elif isinstance(value, int):
        return "received an int"
```

If you later added another color to the `Color` enumeration above (e.g. `YELLOW = 4`), Pyright would detect that `func3` no longer exhausts all members of the enumeration and possibly returns `None`, which violates the declared return type. Likewise, if you modify the type of the `value` parameter in `func4` to expand the union, a similar error will be produced.

This "narrowing for implied else" technique works for all narrowing expressions listed above with the exception of simple falsey/truthy statements and type guards. It is also limited to simple names and doesn't work with member access or index expressions, and it requires that the name has a declared type (an explicit type annotation). These limitations are imposed because this functionality would otherwise have significant impact on analysis performance.

### Narrowing Any

In general, the type `Any` is not narrowed. The only exceptions to this rule are the built-in `isinstance` and `issubclass` type guards, class pattern matching in "match" statements, and user-defined type guards. In all other cases, `Any` is left as is, even for assignments.

```python
a: Any = 3
reveal_type(a)  # Any

a = "hi"
reveal_type(a)  # Any
```

The same applies to `Any` when it is used as a type argument.

```python
b: Iterable[Any] = [1, 2, 3]
reveal_type(b)  # list[Any]

c: Iterable[str] = [""]
b = c
reveal_type(b)  # list[Any]
```

### Narrowing for Captured Variables

If a variable's type is narrowed in an outer scope and the variable is subsequently captured by an inner-scoped function or lambda, Pyright retains the narrowed type if it can determine that the value of the captured variable is not modified on any code path after the inner-scope function or lambda is defined and is not modified in another scope via a `nonlocal` or `global` binding.

```python
def func(val: int | None):
    if val is not None:
        def inner_1() -> None:
            reveal_type(val)  # int
            print(val + 1)

        inner_2 = lambda: reveal_type(val) + 1  # int

        inner_1()
        inner_2()
```

### Value-Constrained Type Variables

When a TypeVar is defined, it can be constrained to two or more types (values).
```python
# Example of unconstrained type variable
_T = TypeVar("_T")

# Example of value-constrained type variables
_StrOrFloat = TypeVar("_StrOrFloat", str, float)
```

When a value-constrained TypeVar appears more than once within a function signature, the type provided for all instances of the TypeVar must be consistent.

```python
def add(a: _StrOrFloat, b: _StrOrFloat) -> _StrOrFloat:
    return a + b

# The arguments for `a` and `b` are both `str`
v1 = add("hi", "there")
reveal_type(v1)  # str

# The arguments for `a` and `b` are both `float`
v2 = add(1.3, 2.4)
reveal_type(v2)  # float

# The arguments for `a` and `b` are inconsistent types
v3 = add(1.3, "hi")  # Error
```

### Conditional Types and Type Variables

When checking the implementation of a function that uses type variables in its signature, the type checker must verify that type consistency is guaranteed. Consider the following example, where the input parameter and return type are both annotated with a type variable. The type checker must verify that if a caller passes an argument of type `str`, then all code paths must return a `str`. Likewise, if a caller passes an argument of type `float`, all code paths must return a `float`.

```python
def add_one(value: _StrOrFloat) -> _StrOrFloat:
    if isinstance(value, str):
        sum = value + "1"
    else:
        sum = value + 1

    reveal_type(sum)  # str* | float*
    return sum
```

The type of variable `sum` is reported with a star (`*`). This indicates that internally the type checker is tracking the type as a "conditional" type. In this particular example, it indicates that `sum` is a `str` type if the parameter `value` is a `str` but is a `float` if `value` is a `float`. By tracking these conditional types, the type checker can verify that the return type is consistent with the return type `_StrOrFloat`.

Conditional types are a form of _intersection_ type, and they are considered subtypes of both the concrete type and the type variable.

### Inferred Type of "self" and "cls" Parameters

When a type annotation for a method's `self` or `cls` parameter is omitted, pyright will infer its type based on the class that contains the method. The inferred type is internally represented as a type variable that is bound to the class. The type of `self` is represented as `Self@ClassName` where `ClassName` is the class that contains the method. Likewise, the `cls` parameter in a class method will have the type `Type[Self@ClassName]`.

```python
class Parent:
    def method1(self):
        reveal_type(self)  # Self@Parent
        return self

    @classmethod
    def method2(cls):
        reveal_type(cls)  # Type[Self@Parent]
        return cls

class Child(Parent): ...

reveal_type(Child().method1())  # Child
reveal_type(Child.method2())  # Type[Child]
```

### Overloads

Some functions or methods can return one of several different types. In cases where the return type depends on the types of the input arguments, it is useful to specify this using a series of `@overload` signatures. When Pyright evaluates a call expression, it determines which overload signature best matches the supplied arguments.

[PEP 484](https://www.python.org/dev/peps/pep-0484/#function-method-overloading) introduced the `@overload` decorator and described how it can be used, but the PEP did not specify precisely how a type checker should choose the "best" overload. Pyright uses the following rules.

1. Pyright first filters the list of overloads based on simple "arity" (number of arguments) and keyword argument matching.
For example, if one overload requires two positional arguments but only one positional argument is supplied by the caller, that overload is eliminated from consideration. Likewise, if the call includes a keyword argument but no corresponding parameter is included in the overload, it is eliminated from consideration.

2. Pyright next considers the types of the arguments and compares them to the declared types of the corresponding parameters. If the types do not match for a given overload, that overload is eliminated from consideration. Bidirectional type inference is used to determine the types of the argument expressions.

3. If only one overload remains, it is the "winner".

4. If more than one overload remains, the "winner" is chosen based on the order in which the overloads are declared. In general, the first remaining overload is the "winner". There are two exceptions to this rule. Exception 1: When an `*args` (unpacked) argument matches a `*args` parameter in one of the overload signatures, this overrides the normal order-based rule. Exception 2: When two or more overloads match because an argument evaluates to `Any` or `Unknown`, the matching overload is ambiguous. In this case, pyright examines the return types of the remaining overloads and eliminates types that are duplicates or are subsumed by (i.e. proper subtypes of) other types in the list. If only one type remains after this coalescing step, that type is used. If more than one type remains after this coalescing step, the type of the call expression evaluates to `Unknown`. For example, if two overloads are matched due to an argument that evaluates to `Any`, and those two overloads have return types of `str` and `LiteralString`, pyright will coalesce this to just `str` because `LiteralString` is a proper subtype of `str`. If the two overloads have return types of `str` and `bytes`, the call expression will evaluate to `Unknown` because `str` and `bytes` have no overlap.

5. If no overloads remain, Pyright considers whether any of the arguments are union types. If so, these union types are expanded into their constituent subtypes, and the entire process of overload matching is repeated with the expanded argument types. If two or more overloads match, the union of their respective return types forms the final return type for the call expression. This "union expansion" can result in a combinatorial explosion if many arguments evaluate to union types. For example, if four arguments are present, and they all evaluate to unions that expand to ten subtypes, this could result in 10^4 combinations. Pyright expands unions for arguments left to right and halts expansion when the number of signatures exceeds 64.

6. If no overloads remain and all unions have been expanded, a diagnostic is generated indicating that the supplied arguments are incompatible with all overload signatures.

### Class and Instance Variables

Most object-oriented languages clearly differentiate between class variables and instance variables. Python is a bit looser in that it allows an object to overwrite a class variable with an instance variable of the same name.

```python
class A:
    my_var = 0

    def my_method(self):
        self.my_var = "hi!"

a = A()

print(A.my_var)  # Class variable value of 0
print(a.my_var)  # Class variable value of 0

A.my_var = 1
print(A.my_var)  # Updated class variable value of 1
print(a.my_var)  # Updated class variable value of 1

a.my_method()  # Writes to the instance variable my_var
print(A.my_var)  # Class variable value of 1
print(a.my_var)  # Instance variable value of "hi!"

A.my_var = 2
print(A.my_var)  # Updated class variable value of 2
print(a.my_var)  # Instance variable value of "hi!"
```

Pyright differentiates between three types of variables: pure class variables, regular class variables, and pure instance variables.

#### Pure Class Variables

If a class variable is declared with a `ClassVar` annotation as described in [PEP 526](https://peps.python.org/pep-0526/#class-and-instance-variable-annotations), it is considered a "pure class variable" and cannot be overwritten by an instance variable of the same name.

```python
from typing import ClassVar

class A:
    x: ClassVar[int] = 0

    def instance_method(self):
        self.x = 1  # Type error: Cannot overwrite class variable

    @classmethod
    def class_method(cls):
        cls.x = 1

a = A()
print(A.x)
print(a.x)

A.x = 1
a.x = 2  # Type error: Cannot overwrite class variable
```

#### Regular Class Variables

If a class variable is declared without a `ClassVar` annotation, it can be overwritten by an instance variable of the same name. The declared type of the instance variable is assumed to be the same as the declared type of the class variable.

Regular class variables can also be declared within a class method using a `cls` member access expression, but declaring regular class variables within the class body is more common and generally preferred for readability.

```python
class A:
    x: int = 0
    y: int

    def instance_method(self):
        self.x = 1
        self.y = 2

    @classmethod
    def class_method(cls):
        cls.z: int = 3

A.y = 0
A.z = 0
print(f"{A.x}, {A.y}, {A.z}")  # 0, 0, 0

A.class_method()
print(f"{A.x}, {A.y}, {A.z}")  # 0, 0, 3

a = A()
print(f"{a.x}, {a.y}, {a.z}")  # 0, 0, 3

a.instance_method()
print(f"{a.x}, {a.y}, {a.z}")  # 1, 2, 3

a.x = "hi!"  # Error: Incompatible type
```

#### Pure Instance Variables

If a variable is not declared within the class body but is instead declared within a class method using a `self` member access expression, it is considered a "pure instance variable". Such variables cannot be accessed through a class reference.

```python
class A:
    def __init__(self):
        self.x: int = 0
        self.y: int

print(A.x)  # Error: 'x' is not a class variable

a = A()
print(a.x)

a.x = 1
a.y = 2
print(f"{a.x}, {a.y}")  # 1, 2

print(a.z)  # Error: 'z' is not a known member
```

#### Inheritance of Class and Instance Variables

Class and instance variables are inherited from parent classes. If a parent class declares the type of a class or instance variable, a derived class must honor that type when assigning to it.

```python
class Parent:
    x: int | str | None
    y: int

class Child(Parent):
    x = "hi!"
    y = None  # Error: Incompatible type
```

The derived class can redeclare the type of a class or instance variable. If `reportIncompatibleVariableOverride` is enabled, the redeclared type must be the same as the type declared by the parent class. If the variable is immutable (as in a frozen `dataclass`), it is considered covariant, and it can be redeclared as a subtype of the type declared by the parent class.
```python
class Parent:
    x: int | str | None
    y: int

class Child(Parent):
    x: int  # Type error: 'x' cannot be redeclared with a subtype because the variable is mutable and therefore invariant
    y: str  # Type error: 'y' cannot be redeclared with an incompatible type
```

If a parent class declares the type of a class or instance variable and a derived class does not redeclare it but does assign a value to it, the declared type is retained from the parent class. It is not overridden by the inferred type of the assignment in the derived class.

```python
class Parent:
    x: object

class Child(Parent):
    x = 3

reveal_type(Parent.x)  # object
reveal_type(Child.x)  # object
```

If neither the parent nor the derived class declare the type of a class or instance variable, the type is inferred within each class.

```python
class Parent:
    x = object()

class Child(Parent):
    x = 3

reveal_type(Parent.x)  # object
reveal_type(Child.x)  # int
```

#### Type Variable Scoping

A type variable must be bound to a valid scope (a class, function, or type alias) before it can be used within that scope. Pyright displays the bound scope for a type variable using an `@` symbol. For example, `T@func` means that type variable `T` is bound to function `func`.

```python
S = TypeVar("S")
T = TypeVar("T")

def func(a: T) -> T:
    b: T = a  # T refers to T@func
    reveal_type(b)  # T@func

    c: S  # Error: S has no bound scope in this context
    return b
```

When a TypeVar or ParamSpec appears within parameter or return type annotations for a function and it is not already bound to an outer scope, it is normally bound to the function. As an exception to this rule, if the TypeVar or ParamSpec appears only within the return type annotation of the function and only within a single Callable in the return type, it is bound to that Callable rather than the function. This allows a function to return a generic Callable.

```python
# T is bound to func1 because it appears in a parameter type annotation.
def func1(a: T) -> Callable[[T], T]:
    a: T  # OK because T is bound to func1

# T is bound to the return callable rather than func2 because it appears
# only within a return Callable.
def func2() -> Callable[[T], T]:
    a: T  # Error because T has no bound scope in this context

# T is bound to func3 because it appears outside of a Callable.
def func3() -> Callable[[T], T] | T: ...

# This scoping logic applies also to type aliases used within a return
# type annotation. T is bound to the return Callable rather than func4.
Transform = Callable[[S], S]

def func4() -> Transform[T]: ...
```

### Type Annotation Comments

Versions of Python prior to 3.6 did not support type annotations for variables. Pyright honors type annotations found within a comment at the end of the same line where a variable is assigned.

```python
offsets = []  # type: list[int]

self._target = 3  # type: int | str
```

Future versions of Python will likely deprecate support for type annotation comments. The "reportTypeCommentUsage" diagnostic will report usage of such comments so they can be replaced with inline type annotations.

### Literal Math Inference

When inferring the type of some unary and binary operations that involve operands with literal types, pyright computes the result of operations on the literal values, producing a new literal type in the process.
For example:

```python
def func(x: Literal[1, 3], y: Literal[4, 7]):
    z = x + y
    reveal_type(z)  # Literal[5, 8, 7, 10]

    z = x * y
    reveal_type(z)  # Literal[4, 7, 12, 21]

    z = (x | y) ^ 1
    reveal_type(z)  # Literal[4, 6]

    z = x ** y
    reveal_type(z)  # Literal[1, 81, 2187]
```

Literal math also works on `str` literals.

```python
reveal_type("a" + "b")  # Literal["ab"]
```

A literal math operation can produce a large union. Pyright limits the number of subtypes in the resulting union to 64. If the union grows beyond that, the corresponding non-literal type is inferred.

```python
def func(x: Literal[1, 2, 3, 4, 5]):
    y = x * x
    reveal_type(y)  # Literal[1, 2, 3, 4, 5, 6, 8, 10, 9, 12, 15, 16, 20, 25]

    z = y * x
    reveal_type(z)  # int
```

Literal math inference is disabled within loops and lambda expressions.

### Static Conditional Evaluation

Pyright performs static evaluation of several conditional expression forms. This includes several forms that are mandated by the [Python typing spec](https://typing.readthedocs.io/en/latest/spec/directives.html#version-and-platform-checking).

* `sys.version_info <comparison> <version tuple>`
* `sys.version_info[0] >= <int literal>`
* `sys.platform == <str literal>`
* `os.name == <str literal>`
* `typing.TYPE_CHECKING` or `typing_extensions.TYPE_CHECKING`
* `True` or `False`
* An identifier defined with the "defineConstant" configuration option
* A `not` unary operator with any of the above forms
* An `and` or `or` binary operator with any of the above forms

If one of these conditional expressions evaluates statically to false, pyright does not analyze any of the code within it other than checking for and reporting syntax errors.

### Reachability

Pyright performs "reachability analysis" to determine whether statements will be executed at runtime. Reachability analysis is based on both non-type and type information. Non-type information includes statements that affect code structure such as `continue`, `raise` and `return`. It also includes conditional statements (`if`, `elif`, or `while`) where the conditional expression is one of these [supported expression forms](#static-conditional-evaluation).

Type analysis is not performed on code determined to be unreachable using non-type information. Therefore, language server features like completion suggestions are not available for this code.

Here are some examples of code determined to be unreachable using non-type information.

```python
from typing import TYPE_CHECKING
import sys

if False:
    print('unreachable')

if not TYPE_CHECKING:
    print('unreachable')

if sys.version_info < (3, 0):
    print('unreachable')

if sys.platform == 'ENIAC':
    print('unreachable')

def func1():
    return
    print('unreachable')

def func2():
    raise NotImplementedError
    print('unreachable')
```

Pyright can also detect code that is unreachable based on static type analysis. This analysis is based on the assumption that any provided type annotations are accurate. Here are some examples of code determined to be unreachable using type analysis.

```python
from typing import Literal, NoReturn

def always_raise() -> NoReturn:
    raise ValueError

def func1():
    always_raise()
    print('unreachable')

def func2(x: str):
    if not isinstance(x, str):
        print('unreachable')

def func3(x: Literal[1, 2]):
    if x == 1 or x == 2:
        return
    print("unreachable")
```

Code that is determined to be unreachable is reported through the use of "tagged hints". These are special diagnostics that tell a language client to display the code in a visually distinctive manner, typically with a grayed-out appearance.
Code determined to be unreachable using non-type information is always reported through this mechanism. Code determined to be unreachable using type analysis is reported only if "enableReachabilityAnalysis" is enabled in the configuration.

---

# Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/type-inference.md

## Understanding Type Inference

### Symbols and Scopes

In Python, a _symbol_ is any name that is not a keyword. Symbols can represent classes, functions, methods, variables, parameters, modules, type aliases, type variables, etc.

Symbols are defined within _scopes_. A scope is associated with a block of code and defines which symbols are visible to that code block. Scopes can be "nested", allowing code to see symbols within its immediate scope and all "outer" scopes.

The following constructs within Python define a scope:

1. The "builtins" scope is always present and is always the outermost scope. It is pre-populated by the Python interpreter with symbols like "int" and "list".
2. The module scope (sometimes called the "global" scope) is defined by the current source code file.
3. Each class defines its own scope. Symbols that represent methods, class variables, or instance variables appear within a class scope.
4. Each function and lambda defines its own scope. The function's parameters are symbols within its scope, as are any variables defined within the function.
5. List comprehensions define their own scope.

### Type Declarations

A symbol can be declared with an explicit type. The "def" and "class" keywords, for example, declare a symbol as a function or a class. Other symbols in Python can be introduced into a scope with no declared type. Newer versions of Python have introduced syntax for declaring the types of input parameters, return parameters, and variables. When a parameter or variable is annotated with a type, the type checker verifies that all values assigned to that parameter or variable conform to that type.

Consider the following example:

```python
def func1(p1: float, p2: str, p3, **p4) -> None:
    var1: int = p1  # This is a type violation
    var2: str = p2  # This is allowed because the types match
    var2: int  # This is an error because it redeclares var2
    var3 = p1  # var3 does not have a declared type
    return var1  # This is a type violation
```

Symbol | Symbol Category | Scope  | Declared Type
-------|-----------------|--------|-------------------------------------------
func1  | Function        | Module | (float, str, Any, dict[str, Any]) -> None
p1     | Parameter       | func1  | float
p2     | Parameter       | func1  | str
p3     | Parameter       | func1  | `<none>`
p4     | Parameter       | func1  | `<none>`
var1   | Variable        | func1  | int
var2   | Variable        | func1  | str
var3   | Variable        | func1  | `<none>`

Note that once a symbol's type is declared, it cannot be redeclared to a different type.

### Type Inference

Some languages require every symbol to be explicitly typed. Python allows a symbol to be bound to different values at runtime, so its type can change over time. A symbol's type doesn't need to be declared statically.

When Pyright encounters a symbol with no type declaration, it attempts to _infer_ the type based on the values assigned to it. As we will see below, type inference cannot always determine the correct (intended) type, so type annotations are still required in some cases. Furthermore, type inference can require significant computation, so it is much less efficient than when type annotations are provided.
### “Unknown” Type If a symbol’s type cannot be inferred, Pyright sets its type to “Unknown”, which is a special form of “Any”. The “Unknown” type allows Pyright to optionally warn when types are not declared and cannot be inferred, thus leaving potential “blind spots” in type checking. #### Single-Assignment Type Inference The simplest form of type inference is one that involves a single assignment to a symbol. The inferred type comes from the type of the source expression. Examples include: ```python var1 = 3 # Inferred type is int var2 = "hi" # Inferred type is str var3 = list() # Inferred type is list[Unknown] var4 = [3, 4] # Inferred type is list[int] for var5 in [3, 4]: ... # Inferred type is int var6 = [p for p in [1, 2, 3]] # Inferred type is list[int] ``` #### Multi-Assignment Type Inference When a symbol is assigned values in multiple places within the code, those values may have different types. The inferred type of the variable is the union of all such types. ```python # In this example, symbol var1 has an inferred type of `str | int`. class Foo: def __init__(self): self.var1 = "" def do_something(self, val: int): self.var1 = val # In this example, symbol var2 has an inferred type of `Foo | None`. if __debug__: var2 = None else: var2 = Foo() ``` #### Ambiguous Type Inference In some cases, an expression’s type is ambiguous. For example, what is the type of the expression `[]`? Is it `list[None]`, `list[int]`, `list[Any]`, `Sequence[Any]`, `Iterable[Any]`? These ambiguities can lead to unintended type violations. Pyright uses several techniques for reducing these ambiguities based on contextual information. In the absence of contextual information, heuristics are used. #### Bidirectional Type Inference (Expected Types) One powerful technique Pyright uses to eliminate type inference ambiguities is _bidirectional inference_. This technique makes use of an “expected type”. As we saw above, the type of the expression `[]` is ambiguous, but if this expression is passed as an argument to a function, and the corresponding parameter is annotated with the type `list[int]`, Pyright can now assume that the type of `[]` in this context must be `list[int]`. Ambiguity eliminated! This technique is called “bidirectional inference” because type inference for an assignment normally proceeds by first determining the type of the right-hand side (RHS) of the assignment, which then informs the type of the left-hand side (LHS) of the assignment. With bidirectional inference, if the LHS of an assignment has a declared type, it can influence the inferred type of the RHS. Let’s look at a few examples: ```python var1 = [] # Type of RHS is ambiguous var2: list[int] = [] # Type of LHS now makes type of RHS unambiguous var3 = [4] # Type is assumed to be list[int] var4: list[float] = [4] # Type of RHS is now list[float] var5 = (3,) # Type is assumed to be tuple[Literal[3]] var6: tuple[float, ...] = (3,) # Type of RHS is now tuple[float, ...] ``` #### Empty List and Dictionary Type Inference It is common to initialize a local variable or instance variable to an empty list (`[]`) or empty dictionary (`{}`) on one code path but initialize it to a non-empty list or dictionary on other code paths. In such cases, Pyright will infer the type based on the non-empty list or dictionary and suppress errors about a “partially unknown type”. 
```python
if some_condition:
    my_list = []
else:
    my_list = ["a", "b"]

reveal_type(my_list)  # list[str]
```

#### Return Type Inference

As with variable assignments, function return types can be inferred from the `return` statements found within that function. The returned type is assumed to be the union of all types returned from all `return` statements. If a `return` statement is not followed by an expression, it is assumed to return `None`. Likewise, if the function does not end in a `return` statement, and the end of the function is reachable, an implicit `return None` is assumed.

```python
# This function has two explicit return statements and one implicit
# return (at the end). It does not have a declared return type,
# so Pyright infers its return type based on the return expressions.
# In this case, the inferred return type is `str | bool | None`.
def func1(val: int):
    if val > 3:
        return ""
    elif val < 1:
        return True
```

#### `Never` return type

If there is no code path that returns from a function (e.g. all code paths raise an exception), Pyright infers a return type of `Never`. As an exception to this rule, if the function is decorated with `@abstractmethod`, the return type is not inferred as `Never` even if there is no return. This accommodates a common practice where an abstract method is implemented with a `raise` statement that raises an exception of type `NotImplementedError`.

```python
class Foo:
    # The inferred return type is Never.
    def method1(self):
        raise Exception()

    # The inferred return type is Unknown.
    @abstractmethod
    def method2(self):
        raise NotImplementedError()
```

##### Difference between `Never` and `NoReturn`

both `Never` and `NoReturn` mean the exact same thing. `Never` was added in python 3.11 once it became clear that the type doesn't just mean "a function that doesn't return a value", because it can be used in any situation to refer to the narrowest possible type, eg. to validate that all possible types have been narrowed out of a union:

```py
from typing import assert_never

def foo(value: int | str):
    if isinstance(value, int):
        ...
    elif isinstance(value, str):
        ...
    else:
        assert_never(value)  # will report a type error if any type is unaccounted for
```

this is because `assert_never` takes `Never` as a parameter:

```py
def assert_never(arg: Never, /) -> Never: ...
```

the name `NoReturn` makes no sense for use cases like this, which is why the `Never` type is preferred. a future release of basedpyright will mark `NoReturn` as deprecated in favor of `Never` for this reason. [see this comment for more information](https://github.com/DetachHead/basedpyright/issues/356#issuecomment-2224318021)

#### Generator return types

Pyright can infer the return type for a generator function from the `yield` statements contained within that function.

#### Call-site Return Type Inference

It is common for input parameters to be unannotated. This can make it difficult for Pyright to infer the correct return type for a function. For example:

```python
# The return type of this function cannot be fully inferred based
# on the information provided because the types of parameters
# a and b are unknown. In this case, the inferred return
# type is `Unknown | None`.
def func1(a, b, c):
    if c:
        return a
    elif c > 3:
        return b
    else:
        return None
```

In cases where all parameters are unannotated, Pyright uses a technique called _call-site return type inference_. It performs type inference using the types of arguments passed to the function in a call expression.
If the unannotated function calls other functions, call-site return type inference can be used recursively. Pyright limits this recursion to a small number for practical performance reasons.

```python
def func2(p_int: int, p_str: str, p_flt: float):
    # The type of var1 is inferred to be `int | None` based
    # on call-site return type inference.
    var1 = func1(p_int, p_int, p_int)

    # The type of var2 is inferred to be `str | float | None`.
    var2 = func1(p_str, p_flt, p_int)
```

#### Parameter Type Inference

Input parameters for functions and methods typically require type annotations. There are several cases where Pyright may be able to infer a parameter’s type if it is unannotated.

For instance methods, the first parameter (named `self` by convention) is inferred to be type `Self`. For class methods, the first parameter (named `cls` by convention) is inferred to be type `type[Self]`.

For other unannotated parameters within a method, Pyright looks for a method of the same name implemented in a base class. If the corresponding method in the base class has the same signature (the same number of parameters with the same names), no overloads, and annotated parameter types, the type annotation from this method is “inherited” for the corresponding parameter in the child class method.

```python
class Parent:
    def method1(self, a: int, b: str) -> float: ...

class Child(Parent):
    def method1(self, a, b):
        return a

reveal_type(Child.method1)  # (self: Child, a: int, b: str) -> int
```

When parameter types are inherited from a base class method, the return type is not inherited. Instead, normal return type inference techniques are used.

If the type of an unannotated parameter cannot be inferred using any of the above techniques and the parameter has a default argument expression associated with it, the parameter type is inferred from the default argument type. If the default argument is `None`, the inferred type is `Unknown | None`.

```python
def func(a, b=0, c=None):
    pass

reveal_type(func)  # (a: Unknown, b: int, c: Unknown | None) -> None
```

This inference technique also applies to lambdas whose input parameters include default arguments.

```python
cb = lambda x = "": x
reveal_type(cb)  # (x: str = "") -> str
```

#### Literals

Python 3.8 introduced support for _literal types_. This allows a type checker like Pyright to track specific literal values of str, bytes, int, bool, and enum values. As with other types, literal types can be declared.

```python
# This function is allowed to return only values 1, 2 or 3.
def func1() -> Literal[1, 2, 3]: ...

# This function must be passed one of three specific string values.
def func2(mode: Literal["r", "w", "rw"]) -> None: ...
```

When Pyright is performing type inference, it generally does not infer literal types. Consider the following example:

```python
# If Pyright inferred the type of var1 to be list[Literal[4]],
# any attempt to append a value other than 4 to this list would
# generate an error. Pyright therefore infers the broader
# type list[int].
var1 = [4]
```

#### Tuple Expressions

When inferring the type of a tuple expression (in the absence of bidirectional inference hints), Pyright assumes that the tuple has a fixed length, and each tuple element is typed as specifically as possible.

```python
# The inferred type is tuple[Literal[1], Literal["a"], Literal[True]].
var1 = (1, "a", True)

def func1(a: int):
    # The inferred type is tuple[int, int].
    var2 = (a, a)

    # If you want the type to be tuple[int, ...]
    # (i.e. a homogeneous tuple of indeterminate length),
    # use a type annotation.
    var3: tuple[int, ...] = (a, a)
```

Because tuples are typed as specifically as possible, literal types are normally retained. However, as an exception to this inference rule, if the tuple expression is nested within another tuple, set, list or dictionary expression, literal types are not retained. This is done to avoid the inference of complex types (e.g. unions with many subtypes) when evaluating tuple statements with many entries.

```python
# The inferred type is list[tuple[int, str, bool]].
var4 = [(1, "a", True), (2, "b", False), (3, "c", False)]
```

#### List Expressions

When inferring the type of a list expression (in the absence of bidirectional inference hints), Pyright uses the following heuristics:

1. If the list is empty (`[]`), assume `list[Unknown]` (unless a known list type is assigned to the same variable along another code path).
2. If the list contains at least one element and all elements are the same type T, infer the type `list[T]`.
3. If the list contains multiple elements that are of different types, the behavior depends on the `strictListInference` configuration setting. By default this setting is off.
    * If `strictListInference` is off, infer `list[Unknown]`.
    * Otherwise use the union of all element types and infer `list[Union[(elements)]]`.

These heuristics can be overridden through the use of bidirectional inference hints (e.g. by providing a declared type for the target of the assignment expression).

```python
var1 = []  # Infer list[Unknown]

var2 = [1, 2]  # Infer list[int]

# Type depends on strictListInference config setting
var3 = [1, 3.4]  # Infer list[Unknown] (off)
var3 = [1, 3.4]  # Infer list[int | float] (on)

var4: list[float] = [1, 3.4]  # Infer list[float]
```

#### Set Expressions

When inferring the type of a set expression (in the absence of bidirectional inference hints), Pyright uses the following heuristics:

1. If the set contains at least one element and all elements are the same type T, infer the type `set[T]`.
2. If the set contains multiple elements that are of different types, the behavior depends on the `strictSetInference` configuration setting. By default this setting is off.
    * If `strictSetInference` is off, infer `set[Unknown]`.
    * Otherwise use the union of all element types and infer `set[Union[(elements)]]`.

These heuristics can be overridden through the use of bidirectional inference hints (e.g. by providing a declared type for the target of the assignment expression).

```python
var1 = {1, 2}  # Infer set[int]

# Type depends on strictSetInference config setting
var2 = {1, 3.4}  # Infer set[Unknown] (off)
var2 = {1, 3.4}  # Infer set[int | float] (on)

var3: set[float] = {1, 3.4}  # Infer set[float]
```

#### Dictionary Expressions

When inferring the type of a dictionary expression (in the absence of bidirectional inference hints), Pyright uses the following heuristics:

1. If the dict is empty (`{}`), assume `dict[Unknown, Unknown]`.
2. If the dict contains at least one element and all keys are the same type K and all values are the same type V, infer the type `dict[K, V]`.
3. If the dict contains multiple elements where the keys or values differ in type, the behavior depends on the `strictDictionaryInference` configuration setting. By default this setting is off.
    * If `strictDictionaryInference` is off, infer `dict[Unknown, Unknown]`.
    * Otherwise use the union of all key and value types `dict[Union[(keys)], Union[(values)]]`.
```python var1 = {} # Infer dict[Unknown, Unknown] var2 = {1: ""} # Infer dict[int, str] # Type depends on strictDictionaryInference config setting var3 = {"a": 3, "b": 3.4} # Infer dict[str, Unknown] (off) var3 = {"a": 3, "b": 3.4} # Infer dict[str, int | float] (on) var4: dict[str, float] = {"a": 3, "b": 3.4} ``` #### Lambdas Lambdas present a particular challenge for a Python type checker because there is no provision in the Python syntax for annotating the types of a lambda’s input parameters. The types of these parameters must therefore be inferred based on context using bidirectional type inference. Absent this context, a lambda’s input parameters (and often its return type) will be unknown. ```python # The type of var1 is (a: Unknown, b: Unknown) -> Unknown. var1 = lambda a, b: a + b # This function takes a comparison function callback. def float_sort(list: list[float], comp: Callable[[float, float], bool]): ... # In this example, the types of the lambda’s input parameters # a and b can be inferred to be float because the float_sort # function expects a callback that accepts two floats as # inputs. float_sort([2, 1.3], lambda a, b: False if a < b else True) ``` --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/type-stubs.md ## Type Stub Files Type stubs are “.pyi” files that specify the public interface for a library. They use a variant of the Python syntax that allows for “...” to be used in place of any implementation details. Type stubs define the public contract for the library. ### Importance of Type Stub Files Regardless of the search path, Pyright always attempts to resolve an import with a type stub (“.pyi”) file before falling back to a python source (“.py”) file. If a type stub cannot be located for an external import, Pyright will try to use inline type information if the module is part of a package that contains a “py.typed” file (defined in [PEP 561](https://www.python.org/dev/peps/pep-0561/)). If the module is part of a package that doesn’t contain a “py.typed” file, Pyright will treat all of the symbols imported from these modules as having type “Unknown”, and wildcard imports (of the form `from foo import *`) will not populate the module’s namespace with specific symbol names. Why does Pyright not attempt (by default) to determine types from imported python sources? There are several reasons. 1. Imported libraries can be quite large, so analyzing them can require significant time and computation. 2. Some libraries are thin shims on top of native (C++) libraries. Little or no type information would be inferable in these cases. 3. Some libraries override Python’s default loader logic. Static analysis is not possible in these cases. 4. Type information inferred from source files is often of low value because many types cannot be inferred correctly. Even if concrete types can be inferred, generic type definitions cannot. 5. Type analysis would expose all symbols from an imported module, even those that are not meant to be exposed by the author. Unlike many other languages, Python offers no way of differentiating between a symbol that is meant to be exported and one that isn’t. If you’re serious about static type checking for your Python source base, it’s highly recommended that you consume “py.typed” packages or use type stub files for all external imports. 
If you are unable to find a type stub for a particular library, the recommended approach is to create a custom type stub file that defines the portion of that module’s interface used by your code. More library maintainers have started to provide inlined types or type stub files.

### Generating Type Stubs

If you use only a few classes, methods or functions within a library, writing a type stub file by hand is feasible. For large libraries, this can become tedious and error-prone. Pyright can generate “draft” versions of type stub files for you.

To generate a type stub file from within VS Code, enable the “reportMissingTypeStubs” setting in your pyrightconfig.json file, or add a comment `# pyright: reportMissingTypeStubs=true` to individual source files. Make sure you have the target library installed in the python environment that pyright is configured to use for import resolution.

Optionally specify a “stubPath” in your pyrightconfig.json file. This is where pyright will generate your type stub files. By default, the stubPath is set to "./typings".

#### Generating Type Stubs in your IDE

If “reportMissingTypeStubs” is enabled, pyright will highlight any imports that have no type stub. Hover over the error message, and you will see a “Quick Fix” link. Clicking on this link will reveal a popup menu item titled “Create Type Stub For XXX”. The example below shows a missing type stub for the `django` library.

![Pyright](CreateTypeStub1.png)

Click on the menu item to create the type stub. Depending on the size of the library, it may take pyright tens of seconds to analyze the library and generate type stub files. Once complete, you should see a message in VS Code indicating success or failure.

![Pyright](CreateTypeStub2.png)

!!! note
    these instructions are specific to VS Code, but this functionality is also available in [other supported editors](../installation/ides.md), since it's built into [the language server](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_codeAction).

#### Generating Type Stubs from Command Line

The command-line version of pyright can also be used to generate type stubs. As with the IDE version, it must be run within the context of your configured project. Then type `pyright --createstub [import-name]`. For example: `pyright --createstub django`

#### Cleaning Up Generated Type Stubs

Pyright can give you a head start by creating type stubs, but you will typically need to clean up the first draft, fixing various errors and omissions that pyright was not able to infer from the original library code. A few common situations that need to be cleaned up:

1. When generating a “.pyi” file, pyright removes any imports that are not referenced. Sometimes libraries import symbols that are meant to be simply re-exported from a module even though they are not referenced internally to that module. In such cases, you will need to manually add back these imports. Pyright does not perform this import culling in `__init__.pyi` files because this re-export technique is especially common in such files.
2. Some libraries attempt to import modules within a try statement. These constructs don’t work well in type stub files because they cannot be evaluated statically. Pyright omits any try statements when creating “.pyi” files, so you may need to add back in these import statements.
3. Decorator functions are especially problematic for static type analyzers.
Unless properly typed, they completely hide the signature of any class or function they are applied to. For this reason, it is highly recommended that you enable the “reportUntypedFunctionDecorator” and “reportUntypedClassDecorator” switches in pyrightconfig.json. Most decorators simply return the same function they are passed. Those can easily be annotated with a TypeVar like this: ```python from typing import Any, Callable, TypeVar _FuncT = TypeVar('_FuncT', bound=Callable[..., Any]) def my_decorator(*args, **kw) -> Callable[[_FuncT], _FuncT]: ... ``` --- # Source: https://github.com/DetachHead/basedpyright/blob/main/docs/usage/typed-libraries.md ## Typing Guidance for Python Libraries Much of Python’s popularity can be attributed to the rich collection of Python libraries available to developers. Authors of these libraries play an important role in improving the experience for Python developers. This document provides some recommendations and guidance for Python library authors. These recommendations are intended to provide the following benefits: 1. Consumers of libraries should have a great coding experience with fast and accurate completion suggestions, class and function documentation, signature help (including parameter default values), hover text, and auto-imports. This should happen by default without needing to download extra packages and without any special configuration. These features should be consistent across the Python ecosystem regardless of a developer’s choice of editor, IDE, notebook environment, etc. 2. Consumers of libraries should be able to rely on complete and accurate type information so static type checkers can detect and report type inconsistencies and other violations of the interface contract. 3. Library authors should be able to specify a well-defined interface contract that is enforced by tools. This allows a library implementation to evolve and improve without breaking consumers of the library. 4. Library authors should have the benefits of static type checking to produce high-quality, bug-free implementations. ### Inlined Type Annotations and Type Stubs [PEP 561](https://www.python.org/dev/peps/pep-0561/) documents several ways type information can be delivered for a library: inlined type annotations, type stub files included in the package, a separate companion type stub package, and type stubs in the typeshed repository. Some of these options fall short on delivering the benefits above. We therefore provide the following more specific guidance to library authors. All libraries should include inlined type annotations for the functions, classes, methods, and constants that comprise the public interface for the library. Inlined type annotations should be included directly within the source code that ships with the package. Of the options listed in PEP 561, inlined type annotations offer the most benefits. They typically require the least effort to add and maintain, they are always consistent with the implementation, and docstrings and default parameter values are readily available, allowing language servers to enhance the development experience. There are cases where inlined type annotations are not possible — most notably when a library’s exposed functionality is implemented in a language other than Python. Libraries that expose symbols implemented in languages other than Python should include stub (“.pyi”) files that describe the types for those symbols. These stubs should also contain docstrings and default parameter values. 
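
For illustration, a stub entry that keeps this information might look like the following sketch (the `fastdb` module and all names in it are invented for this example):

```python
# fastdb/__init__.pyi -- hypothetical stub for a module implemented in C

class Connection:
    """An open connection to a fastdb server."""

    def close(self) -> None:
        """Close the connection and release its resources."""
        ...

def connect(host: str, port: int = 5432, *, timeout: float = 10.0) -> Connection:
    """Open a connection to the given host.

    Keeping the real default values (rather than replacing them with "...")
    lets language servers show them in signature help.
    """
    ...
```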
In many existing type stubs (such as those found in typeshed), default parameter values are replaced with “...” and all docstrings are removed. We recommend that default values and docstrings remain within the type stub file so language servers can display this information to developers.

### Library Interface

[PEP 561](https://www.python.org/dev/peps/pep-0561/) indicates that a “py.typed” marker file must be included in the package if the author wishes to support type checking of their code.

If a “py.typed” marker file is present, a type checker will treat all modules within that package (i.e. all files that end in “.py” or “.pyi”) as importable unless the module is marked private. There are two ways to mark a module private: (1) the module's filename begins with an underscore; (2) the module is inside a sub-package marked private. For example:

* foo._bar (_bar is private)
* foo._bar.baz (_bar and baz are private)
* foo._bar.baz.bop (_bar, baz, and bop are private)

Each module exposes a set of symbols. Some of these symbols are considered “private” — implementation details that are not part of the library’s interface. Type checkers like pyright use the following rules to determine which symbols are visible outside of the package.

* Symbols whose names begin with an underscore (but are not dunder names) are considered private.
* Imported symbols are considered private by default. If they use the “import A as A” (a redundant module alias), “from X import A as A” (a redundant symbol alias), or “from . import A” forms, symbol “A” is not private unless the name begins with an underscore. If a file `__init__.py` uses the form “from .A import X”, symbol “A” is not private unless the name begins with an underscore (but “X” is still private). If a wildcard import (of the form “from X import *”) is used, all symbols referenced by the wildcard are not private.
* A module can expose an `__all__` symbol at the module level that provides a list of names that are considered part of the interface. The `__all__` symbol indicates which symbols are included in a wildcard import. All symbols included in the `__all__` list are considered public even if the other rules above would otherwise indicate that they were private. For example, this allows symbols whose names begin with an underscore to be included in the interface.
* Local variables within a function (including nested functions) are always considered private.

The following idioms are supported for defining the values contained within `__all__`. These restrictions allow type checkers to statically determine the value of `__all__`.

* `__all__ = ('a', 'b')`
* `__all__ = ['a', 'b']`
* `__all__ += ['a', 'b']`
* `__all__ += submodule.__all__`
* `__all__.extend(['a', 'b'])`
* `__all__.extend(submodule.__all__)`
* `__all__.append('a')`
* `__all__.remove('a')`

### Type Completeness

A “py.typed” library is said to be “type complete” if all of the symbols that comprise its interface have type annotations that refer to types that are fully known. Private symbols are exempt.
A “known type” is defined as follows: Classes: * All class variables, instance variables, and methods that are “visible” (not overridden) are annotated and refer to known types * If a class is a subclass of a generic class, type arguments are provided for each generic type parameter, and these type arguments are known types Functions and Methods: * All input parameters have type annotations that refer to known types * The return parameter is annotated and refers to a known type * The result of applying one or more decorators results in a known type Type Aliases: * All of the types referenced by the type alias are known Variables: * All variables have type annotations that refer to known types Type annotations can be omitted in a few specific cases where the type is obvious from the context: * Constants that are assigned simple literal values (e.g. `RED = '#F00'` or `MAX_TIMEOUT = 50` or `room_temperature: Final = 20`). A constant is a symbol that is assigned only once and is either annotated with `Final` or is named in all-caps. A constant that is not assigned a simple literal value requires explicit annotations, preferably with a `Final` annotation (e.g. `WOODWINDS: Final[list[str]] = ['Oboe', 'Bassoon']`). * Enum values within an Enum class do not require annotations because they take on the type of the Enum class. * Type aliases do not require annotations. A type alias is a symbol that is defined at a module level with a single assignment where the assigned value is an instantiable type, as opposed to a class instance (e.g. `Foo = Callable[[Literal["a", "b"]], int | str]` or `Bar = MyGenericClass[int] | None`). * The “self” parameter in an instance method and the “cls” parameter in a class method do not require an explicit annotation. * The return type for an `__init__` method does not need to be specified, since it is always `None`. * The following module-level symbols do not require type annotations: `__all__`,`__author__`, `__copyright__`, `__email__`, `__license__`, `__title__`, `__uri__`, `__version__`. * The following class-level symbols do not require type annotations: `__class__`, `__dict__`, `__doc__`, `__module__`, `__slots__`. * A variable is assigned in only one location using a simple assignment expression and the right-hand side of the assignment is a literal value (e.g. `1`, `3.14`, `"hi"`, or `MyEnum.Value`) or an identifier that has a known type that doesn't depend on type narrowing logic. #### Ambiguous Types When a symbol is missing a type annotation, a type checker may be able to infer its type based on contextual information. However, type inference rules are not standardized and differ between type checkers. A symbol is said to have an “ambiguous type” if its type may be inferred differently between different Python type checkers. This can lead to a bad experience for consumers of the library. Ambiguous types can be avoided by providing explicit type annotations. 
#### Examples of known, ambiguous and unknown types

```python
# Variable with known type (unambiguous because it uses a literal assignment)
a = 3

# Variable with ambiguous type
a = [3, 4, 5]

# Variable with known (declared) type
a: list[int] = [3, 4, 5]

# Type alias with partially unknown type (because type
# arguments are missing for list and dict)
DictOrList = list | dict

# Type alias with known type
DictOrList = list[Any] | dict[str, Any]

# Generic type alias with known type
_T = TypeVar("_T")
DictOrList = list[_T] | dict[str, _T]

# Function with known type
def func(a: int | None, b: dict[str, float] = {}) -> None:
    pass

# Function with partially unknown type (because type annotations
# are missing for input parameters and return type)
def func(a, b):
    pass

# Function with partially unknown type (because of missing
# type args on dict)
def func(a: int, b: dict) -> None:
    pass

# Function with partially unknown type (because return type
# annotation is missing)
def func(a: int, b: dict[str, float]):
    pass

# Decorator with partially unknown type (because type annotations
# are missing for input parameters and return type)
def my_decorator(func):
    return func

# Function with partially unknown type (because type is obscured
# by untyped decorator)
@my_decorator
def func(a: int) -> str:
    pass

# Class with known type
class MyClass:
    height: float = 2.0

    def __init__(self, name: str, age: int):
        self.age: int = age

    @property
    def name(self) -> str: ...

# Class with partially unknown type
class MyClass:
    # Missing type annotation for class variable
    height = None

    # Missing input parameter annotations
    def __init__(self, name, age):
        # Missing type annotation for instance variable
        self.age = age

    # Missing return type annotation
    @property
    def name(self): ...

# Class with partially unknown type
class BaseClass1:
    # Missing type annotation
    height = 2.0

    # Missing type annotation
    def get_stuff(self): ...

# Class with known type (because it overrides all symbols
# exposed by BaseClass that have incomplete types)
class DerivedClass1(BaseClass1):
    height: float

    def get_stuff(self) -> str: ...

# Class with known type
class BaseClass2:
    height: float = 2.0

# Class with ambiguous type
class DerivedClass2(BaseClass2):
    # Missing type annotation, could be inferred as float or int
    height = 1

# Class with partially unknown type because base class
# (dict) is generic, and type arguments are not specified.
class DictSubclass(dict):
    pass
```

### Verifying Type Completeness

Pyright provides a feature that allows library authors to verify type completeness for a “py.typed” package. To use this feature, create a clean Python environment and install your package along with all of the other dependent packages. Run the CLI version of pyright with the `--verifytypes` option, passing the name of the package to verify:

`pyright --verifytypes <package-name>`

Pyright will analyze the library, identify all symbols that comprise the interface to the library and emit errors for any symbols whose types are ambiguous or unknown. It also produces a “type completeness score” which is the percentage of symbols with known types.

To see additional details (including a full list of symbols in the library), append the `--verbose` option. The `--verifytypes` option can be combined with `--outputjson` to emit the results in a form that can be consumed by other tools.

The `--verifytypes` feature can be integrated into a continuous integration (CI) system to verify that a library remains “type complete”.
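
One way to wire this into CI is a small gate script. The sketch below is only illustrative: the package name and threshold are invented, and it assumes the `--outputjson` payload exposes the score under a `typeCompleteness` object — check the JSON emitted by your pyright/basedpyright version before relying on exact field names.

```python
# verify_types_gate.py -- hypothetical CI gate around `--verifytypes`
import json
import subprocess
import sys

PACKAGE = "mylib"        # the package to verify (illustrative name)
MINIMUM_SCORE = 0.95     # fail the build below 95% type completeness

result = subprocess.run(
    ["basedpyright", "--verifytypes", PACKAGE, "--outputjson"],
    capture_output=True,
    text=True,
)
report = json.loads(result.stdout)

# Field names are an assumption based on the report's shape; adjust if needed.
score = report["typeCompleteness"]["completenessScore"]
print(f"type completeness for {PACKAGE}: {score:.1%}")
sys.exit(0 if score >= MINIMUM_SCORE else 1)
```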
If the `--verifytypes` option is combined with `--ignoreexternal`, any incomplete types that are imported from other external packages are ignored. This allows library authors to focus on adding type annotations for the code that is directly under their control. #### Improving Type Completeness Here are some tips for increasing the type completeness score for your library: * If your package includes tests or sample code, consider removing them from the distribution. If there is good reason to include them, consider placing them in a directory that begins with an underscore so they are not considered part of your library’s interface. * If your package includes submodules that are meant to be implementation details, rename those files to begin with an underscore. * If a symbol is not intended to be part of the library’s interface and is considered an implementation detail, rename it such that it begins with an underscore. It will then be considered private and excluded from the type completeness check. * If your package exposes types from other libraries, work with the maintainers of these other libraries to achieve type completeness. ### Best Practices for Inlined Types #### Wide vs. Narrow Types In type theory, when comparing two types that are related to each other, the “wider” type is the one that is more general, and the “narrower” type is more specific. For example, `Sequence[str]` is a wider type than `list[str]` because all `list` objects are also `Sequence` objects, but the converse is not true. A subclass is narrower than a class it derives from. A union of types is wider than the individual types that comprise the union. In general, a function input parameter should be annotated with the widest possible type supported by the implementation. For example, if the implementation requires the caller to provide an iterable collection of strings, the parameter should be annotated as `Iterable[str]`, not as `list[str]`. The latter type is narrower than necessary, so if a user attempts to pass a tuple of strings (which is supported by the implementation), a type checker will complain about a type incompatibility. As a specific application of the “use the widest type possible” rule, libraries should generally use immutable forms of container types instead of mutable forms (unless the function needs to modify the container). Use `Sequence` rather than `list`, `Mapping` rather than `dict`, etc. Immutable containers allow for more flexibility because their type parameters are covariant rather than invariant. A parameter that is typed as `Sequence[str | int]` can accept a `list[str | int]`, `list[int]`, `Sequence[str]`, and a `Sequence[int]`. But a parameter typed as `list[str | int]` is much more restrictive and accepts only a `list[str | int]`. #### Overloads If a function or method can return multiple different types and those types can be determined based on the presence or types of certain parameters, use the `@overload` mechanism defined in [PEP 484](https://www.python.org/dev/peps/pep-0484/#id45). When overloads are used within a “.py” file, they must appear prior to the function implementation, which should not have an `@overload` decorator. #### Keyword-only Parameters If a function or method is intended to take parameters that are specified only by name, use the keyword-only separator ("*"). ```python def create_user(age: int, *, dob: date | None = None): ... 
``` #### Positional-only Parameters If a function or method is intended to take parameters that are specified only by position, use the positional-only separator ("/") as documented in [PEP 570](https://peps.python.org/pep-0570/). If your library needs to run on versions of Python prior to 3.8, you can alternatively name the positional-only parameters with an identifier that begins with a double underscore. ```python def compare_values(value1: T, value2: T, /) -> bool: ... def compare_values(__value1: T, __value2: T) -> bool: ... ``` ### Annotating Decorators Decorators modify the behavior of a class or a function. Providing annotations for decorators is straightforward if the decorator retains the original signature of the decorated function. ```python _F = TypeVar("_F", bound=Callable[..., Any]) def simple_decorator(_func: _F) -> _F: """ Simple decorators are invoked without parentheses like this: @simple_decorator def my_function(): ... """ ... def complex_decorator(*, mode: str) -> Callable[[_F], _F]: """ Complex decorators are invoked with arguments like this: @complex_decorator(mode="easy") def my_function(): ... """ ... ``` Decorators that mutate the signature of the decorated function present challenges for type annotations. The `ParamSpec` and `Concatenate` mechanisms described in [PEP 612](https://www.python.org/dev/peps/pep-0612/) provide some help here, but these are available only in Python 3.10 and newer. More complex signature mutations may require type annotations that erase the original signature, thus blinding type checkers and other tools that provide signature assistance. As such, library authors are discouraged from creating decorators that mutate function signatures in this manner. #### Generic Classes and Functions Classes and functions that can operate in a generic manner on various types should declare themselves as generic using the mechanisms described in [PEP 484](https://www.python.org/dev/peps/pep-0484/). This includes the use of `TypeVar` symbols. Typically, a `TypeVar` should be private to the file that declares it, and should therefore begin with an underscore. #### Type Aliases Type aliases are symbols that refer to other types. Generic type aliases (those that refer to unspecialized generic classes) are supported by most type checkers. Pyright also provides support for recursive type aliases. [PEP 613](https://www.python.org/dev/peps/pep-0613/) provides a way to explicitly designate a symbol as a type alias using the new TypeAlias annotation. ```python # Simple type alias FamilyPet = Cat | Dog | GoldFish # Generic type alias ListOrTuple = list[_T] | tuple[_T, ...] # Recursive type alias TreeNode = LeafNode | list["TreeNode"] # Explicit type alias using PEP 613 syntax StrOrInt: TypeAlias = str | int ``` #### Abstract Classes and Methods Classes that must be subclassed should derive from `ABC`, and methods or properties that must be overridden should be decorated with the `@abstractmethod` decorator. This allows type checkers to validate that the required methods have been overridden and provide developers with useful error messages when they are not. It is customary to implement an abstract method by raising a `NotImplementedError` exception or subclass thereof. 
```python
from abc import ABC, abstractmethod

class Hashable(ABC):
    @property
    @abstractmethod
    def hash_value(self) -> int:
        """Subclasses must override"""
        raise NotImplementedError()

    @abstractmethod
    def print(self) -> str:
        """Subclasses must override"""
        raise NotImplementedError()
```

#### Final Classes and Methods

Classes that are not intended to be subclassed should be decorated as `@final` as described in [PEP 591](https://www.python.org/dev/peps/pep-0591/). The same decorator can also be used to specify methods that cannot be overridden by subclasses.

#### Literals

Type annotations should make use of the Literal type where appropriate, as described in [PEP 586](https://www.python.org/dev/peps/pep-0586/). Literals allow for more type specificity than their non-literal counterparts.

#### Constants

Constant values (those that are read-only) can be specified using the Final annotation as described in [PEP 591](https://www.python.org/dev/peps/pep-0591/). Type checkers will also typically treat variables that are named using all upper-case characters as constants.

In both cases, it is OK to omit the declared type of a constant if it is assigned a literal str, int, float, bool or None value. In such cases, the type inference rules are clear and unambiguous, and adding a literal type annotation would be redundant.

```python
# All-caps constant with inferred type
COLOR_FORMAT_RGB = "rgb"

# All-caps constant with explicit type
COLOR_FORMAT_RGB: Literal["rgb"] = "rgb"
LATEST_VERSION: tuple[int, int] = (4, 5)

# Final variable with inferred type
ColorFormatRgb: Final = "rgb"

# Final variable with explicit type
ColorFormatRgb: Final[Literal["rgb"]] = "rgb"
LATEST_VERSION: Final[tuple[int, int]] = (4, 5)
```

#### Typed Dictionaries, Data Classes, and Named Tuples

If a library runs only on newer versions of Python, it can use some of the new type-friendly classes. NamedTuple (described in [PEP 484](https://www.python.org/dev/peps/pep-0484/)) is preferred over namedtuple. Data classes (described in [PEP 557](https://www.python.org/dev/peps/pep-0557/)) are preferred over untyped dictionaries. TypedDict (described in [PEP 589](https://www.python.org/dev/peps/pep-0589/)) is preferred over untyped dictionaries.

### Compatibility with Older Python Versions

Each new version of Python from 3.5 onward has introduced new typing constructs. This presents a challenge for library authors who want to maintain runtime compatibility with older versions of Python. This section documents several techniques that can be used to add types while maintaining backward compatibility.

#### Quoted Annotations

Type annotations for variables, parameters, and return types can be placed in quotes. The Python interpreter will then ignore them, whereas a type checker will interpret them as type annotations.

```python
# Older versions of Python do not support subscripting
# for the OrderedDict type, so the annotation must be
# enclosed in quotes.
def get_config(self) -> "OrderedDict[str, str]":
    return self._config
```

#### Type Comment Annotations

Python 3.0 introduced syntax for parameter and return type annotations, as specified in [PEP 3107](https://www.python.org/dev/peps/pep-3107/). Python 3.6 introduced support for variable type annotations, as specified in [PEP 526](https://www.python.org/dev/peps/pep-0526/). If you need to support older versions of Python, type annotations can still be provided as “type comments”. These comments take the form `# type: <annotation>`.
```python class Foo: # Variable type comments go at the end of the line # where the variable is assigned. timeout = None # type: int | None # Function type comments can be specified on the # line after the function signature. def send_message(self, name, length): # type: (str, int) -> None ... # Function type comments can also specify the type # of each parameter on its own line. def receive_message( self, name, # type: str length # type: int ): # type: () -> Message ... ``` #### typing_extensions New type features that require runtime support are typically included in the stdlib `typing` module. Where possible, these new features are back-ported to a runtime library called `typing_extensions` that works with older Python runtimes. #### TYPE_CHECKING The `typing` module exposes a variable called `TYPE_CHECKING` which has a value of False within the Python runtime but a value of True when the type checker is performing its analysis. This allows type checking statements to be conditionalized. Care should be taken when using `TYPE_CHECKING` because behavioral changes between type checking and runtime could mask problems that the type checker would otherwise catch. ### Non-Standard Type Behaviors Type annotations provide a way to annotate typical type behaviors, but some classes implement specialized, non-standard behaviors that cannot be described using standard type annotations. For now, such types need to be annotated as Any, which is unfortunate because the benefits of static typing are lost. ### Docstrings It is recommended that docstrings be provided for all classes, functions, and methods in the interface. They should be formatted according to [PEP 257](https://www.python.org/dev/peps/pep-0257/). There is currently no single agreed-upon standard for function and method docstrings, but several common variants have emerged. We recommend using one of these variants.
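
For example, a Google-style docstring (one of the common variants; the function itself is made up for illustration) looks like this:

```python
def scale(values: list[float], factor: float) -> list[float]:
    """Multiply every value by a constant factor.

    Args:
        values: The numbers to scale.
        factor: The multiplier applied to each value.

    Returns:
        A new list containing the scaled values.
    """
    return [value * factor for value in values]
```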