# Apscheduler > .. autoclass:: apscheduler.Task --- # Source: https://github.com/agronholm/apscheduler/blob/master/docs/api.rst API reference ============= Data structures --------------- .. autoclass:: apscheduler.Task .. autoclass:: apscheduler.TaskDefaults .. autoclass:: apscheduler.Schedule .. autoclass:: apscheduler.ScheduleResult .. autoclass:: apscheduler.Job .. autoclass:: apscheduler.JobResult Decorators ---------- .. autodecorator:: apscheduler.task Schedulers ---------- .. autoclass:: apscheduler.Scheduler .. autoclass:: apscheduler.AsyncScheduler Job executors ------------- .. autoclass:: apscheduler.abc.JobExecutor .. autoclass:: apscheduler.executors.async_.AsyncJobExecutor .. autoclass:: apscheduler.executors.subprocess.ProcessPoolJobExecutor .. autoclass:: apscheduler.executors.qt.QtJobExecutor .. autoclass:: apscheduler.executors.thread.ThreadPoolJobExecutor Data stores ----------- .. autoclass:: apscheduler.abc.DataStore .. autoclass:: apscheduler.datastores.memory.MemoryDataStore .. autoclass:: apscheduler.datastores.sqlalchemy.SQLAlchemyDataStore .. autoclass:: apscheduler.datastores.mongodb.MongoDBDataStore Event brokers ------------- .. autoclass:: apscheduler.abc.EventBroker .. autoclass:: apscheduler.abc.Subscription .. autoclass:: apscheduler.eventbrokers.local.LocalEventBroker .. autoclass:: apscheduler.eventbrokers.asyncpg.AsyncpgEventBroker .. autoclass:: apscheduler.eventbrokers.psycopg.PsycopgEventBroker .. autoclass:: apscheduler.eventbrokers.mqtt.MQTTEventBroker .. autoclass:: apscheduler.eventbrokers.redis.RedisEventBroker Serializers ----------- .. autoclass:: apscheduler.abc.Serializer .. autoclass:: apscheduler.serializers.cbor.CBORSerializer .. autoclass:: apscheduler.serializers.json.JSONSerializer .. autoclass:: apscheduler.serializers.pickle.PickleSerializer Triggers -------- .. autoclass:: apscheduler.abc.Trigger :special-members: __getstate__, __setstate__ .. autoclass:: apscheduler.triggers.date.DateTrigger .. autoclass:: apscheduler.triggers.interval.IntervalTrigger .. autoclass:: apscheduler.triggers.calendarinterval.CalendarIntervalTrigger .. autoclass:: apscheduler.triggers.combining.AndTrigger .. autoclass:: apscheduler.triggers.combining.OrTrigger .. autoclass:: apscheduler.triggers.cron.CronTrigger Events ------ .. autoclass:: apscheduler.Event .. autoclass:: apscheduler.DataStoreEvent .. autoclass:: apscheduler.TaskAdded .. autoclass:: apscheduler.TaskUpdated .. autoclass:: apscheduler.TaskRemoved .. autoclass:: apscheduler.ScheduleAdded .. autoclass:: apscheduler.ScheduleUpdated .. autoclass:: apscheduler.ScheduleRemoved .. autoclass:: apscheduler.JobAdded .. autoclass:: apscheduler.JobRemoved .. autoclass:: apscheduler.ScheduleDeserializationFailed .. autoclass:: apscheduler.JobDeserializationFailed .. autoclass:: apscheduler.SchedulerEvent .. autoclass:: apscheduler.SchedulerStarted .. autoclass:: apscheduler.SchedulerStopped .. autoclass:: apscheduler.JobAcquired .. autoclass:: apscheduler.JobReleased Enumerated types ---------------- .. autoclass:: apscheduler.SchedulerRole() :show-inheritance: .. autoclass:: apscheduler.RunState() :show-inheritance: .. autoclass:: apscheduler.JobOutcome() :show-inheritance: .. autoclass:: apscheduler.ConflictPolicy() :show-inheritance: .. autoclass:: apscheduler.CoalescePolicy() :show-inheritance: Context variables ----------------- See the :mod:`contextvars` module for information on how to work with context variables. .. 
data:: apscheduler.current_scheduler :type: ~contextvars.ContextVar[Scheduler] The current scheduler. .. data:: apscheduler.current_async_scheduler :type: ~contextvars.ContextVar[AsyncScheduler] The current asynchronous scheduler. .. data:: apscheduler.current_job :type: ~contextvars.ContextVar[Job] The job being currently run (available when running the job's target callable). Exceptions ---------- .. autoexception:: apscheduler.TaskLookupError .. autoexception:: apscheduler.ScheduleLookupError .. autoexception:: apscheduler.JobLookupError .. autoexception:: apscheduler.CallableLookupError .. autoexception:: apscheduler.JobResultNotReady .. autoexception:: apscheduler.JobCancelled .. autoexception:: apscheduler.JobDeadlineMissed .. autoexception:: apscheduler.ConflictingIdError .. autoexception:: apscheduler.SerializationError .. autoexception:: apscheduler.DeserializationError .. autoexception:: apscheduler.MaxIterationsReached Support classes for retrying failures ------------------------------------- .. autoclass:: apscheduler.RetrySettings .. autoclass:: apscheduler.RetryMixin Support classes for unset options --------------------------------- .. data:: apscheduler.unset Sentinel value for unset option values. .. autoclass:: apscheduler.UnsetValue --- # Source: https://github.com/agronholm/apscheduler/blob/master/docs/contributing.rst Contributing to APScheduler =========================== .. highlight:: bash If you wish to contribute a fix or feature to APScheduler, please follow the following guidelines. When you make a pull request against the main APScheduler codebase, Github runs the test suite against your modified code. Before making a pull request, you should ensure that the modified code passes tests and code quality checks locally. Running the test suite ---------------------- The test suite has dependencies on several external services, such as database servers. To make this easy for the developer, a `docker compose`_ configuration is provided. To use it, you need Docker_ (or a suitable replacement). On Linux, unless you're using Docker Desktop, you may need to also install the compose (v2) plugin (named ``docker-compose-plugin``, or similar) separately. Once you have the necessary tools installed, you can start the services with this command:: docker compose up -d You can run the test suite two ways: either with tox_, or by running pytest_ directly. To run tox_ against all supported (of those present on your system) Python versions:: tox Tox will handle the installation of dependencies in separate virtual environments. To pass arguments to the underlying pytest_ command, you can add them after ``--``, like this:: tox -- -k somekeyword To use pytest directly, you can set up a virtual environment and install the project in development mode along with its test dependencies (virtualenv activation demonstrated for Linux and macOS; on Windows you need ``venv\Scripts\activate`` instead):: python -m venv venv source venv/bin/activate pip install --group test -e . Now you can just run pytest_:: pytest Building the documentation -------------------------- To build the documentation, run ``tox -e docs``. This will place the documentation in ``build/sphinx/html`` where you can open ``index.html`` to view the formatted documentation. APScheduler uses ReadTheDocs_ to automatically build the documentation so the above procedure is only necessary if you are modifying the documentation and wish to check the results before committing. 
APScheduler uses pre-commit_ to perform several code style/quality checks. It is
recommended to activate pre-commit_ on your local clone of the repository (using
``pre-commit install``) to ensure that your changes will pass the same checks on
GitHub.

Making a pull request on Github
-------------------------------

To get your changes merged to the main codebase, you need a Github account.

#. Fork the repository (if you don't have your own fork of it yet) by navigating to
   the `main APScheduler repository`_ and clicking on "Fork" near the top right
   corner.
#. Clone the forked repository to your local machine with
   ``git clone git@github.com:yourusername/apscheduler``.
#. Create a branch for your pull request, like ``git checkout -b myfixname``.
#. Make the desired changes to the code base.
#. Commit your changes locally. If your changes close an existing issue, add the text
   ``Fixes #XXX.`` or ``Closes #XXX.`` to the commit message (where XXX is the issue
   number).
#. Push the changeset(s) to your forked repository (``git push``).
#. Navigate to the Pull requests page on the original repository (not your fork) and
   click "New pull request".
#. Click on the text "compare across forks".
#. Select your own fork as the head repository and then select the correct branch name.
#. Click on "Create pull request".

If you have trouble, consult the `pull request making guide`_ on opensource.com.

.. _Docker: https://docs.docker.com/desktop/#download-and-install
.. _docker compose: https://docs.docker.com/compose/
.. _tox: https://tox.readthedocs.io/en/latest/install.html
.. _pre-commit: https://pre-commit.com/#installation
.. _pytest: https://pypi.org/project/pytest/
.. _ReadTheDocs: https://readthedocs.org/
.. _main APScheduler repository: https://github.com/agronholm/apscheduler
.. _pull request making guide: https://opensource.com/article/19/7/create-pull-request-github

---

# Source: https://github.com/agronholm/apscheduler/blob/master/docs/extending.rst

#####################
Extending APScheduler
#####################

.. py:currentmodule:: apscheduler

This document is meant to explain how to develop your custom triggers, job executors
and data stores.

Custom triggers
---------------

The built-in triggers cover the needs of the majority of all users, particularly so
when combined using :class:`~triggers.combining.AndTrigger` and
:class:`~triggers.combining.OrTrigger`. However, some users may need specialized
scheduling logic. This can be accomplished by creating your own custom trigger class.

To implement your scheduling logic, create a new class that inherits from the
:class:`~abc.Trigger` interface class::

    from __future__ import annotations

    from datetime import datetime

    from apscheduler.abc import Trigger


    class MyCustomTrigger(Trigger):
        def next(self) -> datetime | None:
            ...  # Your custom logic here

        def __getstate__(self):
            ...  # Return the serializable state here

        def __setstate__(self, state):
            ...  # Restore the state from the return value of __getstate__()
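
To make this more concrete, here is a minimal, hypothetical sketch of a stateful
trigger that fires a fixed number of seconds apart, starting immediately. The class
name and its ``seconds`` argument are invented for this example; a real implementation
must satisfy the constraints listed below::

    from __future__ import annotations

    from datetime import datetime, timedelta, timezone

    from apscheduler.abc import Trigger


    class EverySecondsTrigger(Trigger):
        """Hypothetical trigger that fires every ``seconds`` seconds."""

        def __init__(self, seconds: int) -> None:
            self.seconds = seconds
            self._last_fire_time: datetime | None = None

        def next(self) -> datetime | None:
            if self._last_fire_time is None:
                # First call: fire right away, with a timezone aware datetime
                self._last_fire_time = datetime.now(timezone.utc)
            else:
                # Advance the state so the same datetime is never returned twice
                self._last_fire_time += timedelta(seconds=self.seconds)

            return self._last_fire_time

        def __getstate__(self):
            return {"seconds": self.seconds, "last_fire_time": self._last_fire_time}

        def __setstate__(self, state):
            self.seconds = state["seconds"]
            self._last_fire_time = state["last_fire_time"]
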
Requirements and constraints for trigger classes:

* :meth:`~abc.Trigger.next` must always either return a timezone aware
  :class:`~datetime.datetime` object or :data:`None` if a new run time cannot be
  calculated
* :meth:`~abc.Trigger.next` must never return the same :class:`~datetime.datetime`
  twice and never one that is earlier than the previously returned one
* :meth:`~abc.Trigger.__setstate__` must accept the return value of
  :meth:`~abc.Trigger.__getstate__` and restore the trigger to the functionally same
  state as the original
* :meth:`~abc.Trigger.__getstate__` may only return an object containing types
  serializable by :class:`~abc.Serializer`

Triggers are stateful objects. The :meth:`~abc.Trigger.next` method is where you
determine the next run time based on the current state of the trigger. The trigger's
internal state needs to be updated before returning to ensure that the trigger won't
return the same datetime on the next call. The trigger code does **not** need to be
thread-safe.

Custom job executors
--------------------

.. py:currentmodule:: apscheduler

If you need the ability to use third party frameworks or services to handle the actual
execution of jobs, you will need a custom job executor. A job executor needs to
inherit from :class:`~abc.JobExecutor`. This interface contains one abstract method
you're required to implement: :meth:`~abc.JobExecutor.run_job`. This method is called
with two arguments:

#. ``func``: the callable you're supposed to call
#. ``job``: the :class:`Job` instance

The :meth:`~abc.JobExecutor.run_job` implementation needs to call ``func`` with the
positional and keyword arguments attached to the job (``job.args`` and ``job.kwargs``,
respectively). The return value of the callable must be returned from the method.

Here's an example of a simple job executor that runs a (synchronous) callable in a
thread::

    from collections.abc import Callable
    from functools import partial
    from typing import Any

    from anyio import to_thread

    from apscheduler import Job
    from apscheduler.abc import JobExecutor


    class ThreadJobExecutor(JobExecutor):
        async def run_job(self, func: Callable[..., Any], job: Job) -> Any:
            wrapped = partial(func, *job.args, **job.kwargs)
            return await to_thread.run_sync(wrapped)

If you need to initialize some underlying services, you can override the
:meth:`~abc.JobExecutor.start` method. For example, the executor above could be
improved to take a maximum number of threads and create an AnyIO
:class:`~anyio.CapacityLimiter`::

    from collections.abc import Callable
    from contextlib import AsyncExitStack
    from functools import partial
    from typing import Any

    from anyio import CapacityLimiter, to_thread

    from apscheduler import Job
    from apscheduler.abc import JobExecutor


    class ThreadJobExecutor(JobExecutor):
        _limiter: CapacityLimiter

        def __init__(self, max_threads: int):
            self.max_threads = max_threads

        async def start(self, exit_stack: AsyncExitStack) -> None:
            self._limiter = CapacityLimiter(self.max_threads)

        async def run_job(self, func: Callable[..., Any], job: Job) -> Any:
            wrapped = partial(func, *job.args, **job.kwargs)
            return await to_thread.run_sync(wrapped, limiter=self._limiter)

Custom data stores
------------------

If you want to make use of some external service to store the scheduler data, and it's
not covered by a built-in data store implementation, you may want to create a custom
data store class.
A data store implementation needs to inherit from :class:`~abc.DataStore` and implement several abstract methods: * :meth:`~abc.DataStore.start` * :meth:`~abc.DataStore.add_task` * :meth:`~abc.DataStore.remove_task` * :meth:`~abc.DataStore.get_task` * :meth:`~abc.DataStore.get_tasks` * :meth:`~abc.DataStore.add_schedule` * :meth:`~abc.DataStore.remove_schedules` * :meth:`~abc.DataStore.get_schedules` * :meth:`~abc.DataStore.acquire_schedules` * :meth:`~abc.DataStore.release_schedules` * :meth:`~abc.DataStore.get_next_schedule_run_time` * :meth:`~abc.DataStore.add_job` * :meth:`~abc.DataStore.get_jobs` * :meth:`~abc.DataStore.acquire_jobs` * :meth:`~abc.DataStore.release_job` * :meth:`~abc.DataStore.get_job_result` * :meth:`~abc.DataStore.extend_acquired_schedule_leases` * :meth:`~abc.DataStore.extend_acquired_job_leases` * :meth:`~abc.DataStore.cleanup` The :meth:`~abc.DataStore.start` method is where your implementation can perform any initialization, including starting any background tasks. This method is called with two arguments: #. ``exit_stack``: an :class:`~contextlib.AsyncExitStack` object that can be used to work with context managers #. ``event_broker``: the event broker that the store should be using to send events to other components of the system (including other schedulers) The data store class needs to inherit from :class:`~abc.DataStore`:: from contextlib import AsyncExitStack from apscheduler.abc import DataStore, EventBroker class MyCustomDataStore(DataStore): _event_broker: EventBroker async def start(self, exit_stack: AsyncExitStack, event_broker: EventBroker) -> None: # Save the event broker in a member attribute and initialize the store self._event_broker = event_broker # See the interface class for the rest of the abstract methods Handling temporary failures +++++++++++++++++++++++++++ If you plan to make your data store implementation public, it is strongly recommended that you make an effort to ensure that the implementation can tolerate the loss of connectivity to the backing store. The Tenacity_ library is used for this purpose by the built-in stores to retry operations in case of a disconnection. If you use it to retry operations when exceptions are raised, it is important to only do that in cases of *temporary* errors, like connectivity loss, and not in cases like authentication failure, missing database and so forth. See the built-in data store implementations and Tenacity_ documentation for more information on how to pick the exceptions on which to retry the operations. .. _Tenacity: https://pypi.org/project/tenacity/ --- # Source: https://github.com/agronholm/apscheduler/blob/master/docs/faq.rst ########################## Frequently Asked Questions ########################## Is there a graphical user interface for APScheduler? ==================================================== No graphical interface is provided by the library itself. However, there are some third party implementations, but APScheduler developers are not responsible for them. Here is a potentially incomplete list: * django_apscheduler_ * apschedulerweb_ * `Nextdoor scheduler`_ .. warning:: As of this writing, these third party offerings have not been updated to work with APScheduler 4. .. _django_apscheduler: https://pypi.org/project/django-apscheduler/ .. _Flask-APScheduler: https://pypi.org/project/flask-apscheduler/ .. _aiohttp: https://pypi.org/project/aiohttp/ .. _apschedulerweb: https://github.com/marwinxxii/apschedulerweb .. 
_Nextdoor scheduler: https://github.com/Nextdoor/ndscheduler

---

# Source: https://github.com/agronholm/apscheduler/blob/master/docs/index.rst

Advanced Python Scheduler
=========================

.. include:: ../README.rst
   :end-before: Documentation

Table of Contents
=================

.. toctree::
   :maxdepth: 1

   userguide
   integrations
   versionhistory
   migration
   contributing
   extending
   faq
   api

---

# Source: https://github.com/agronholm/apscheduler/blob/master/docs/integrations.rst

Integrating with application frameworks
=======================================

.. py:currentmodule:: apscheduler

WSGI
----

To integrate APScheduler with web frameworks using WSGI_ (Web Server Gateway
Interface), you need to use the synchronous scheduler and start it as a side effect of
importing the module that contains your application instance::

    from apscheduler import Scheduler


    def app(environ, start_response):
        """Trivial example of a WSGI application."""
        response_body = b"Hello, World!"
        response_headers = [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(response_body))),
        ]
        start_response("200 OK", response_headers)
        return [response_body]


    scheduler = Scheduler()
    scheduler.start_in_background()

Assuming you saved this as ``example.py``, you can now start the application with
uWSGI_ with:

.. code-block:: bash

    uwsgi --enable-threads --http :8080 --wsgi-file example.py

The ``--enable-threads`` (or ``-T``) option is necessary because uWSGI disables
threads by default, which then prevents the scheduler from working. See the `uWSGI
documentation on Python threads
<https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#a-note-on-python-threads>`_
for more details.

.. note:: The :meth:`Scheduler.start_in_background` method installs an :mod:`atexit`
    hook that shuts down the scheduler gracefully when the worker process exits.

.. _WSGI: https://wsgi.readthedocs.io/en/latest/what.html
.. _uWSGI: https://www.fullstackpython.com/uwsgi.html
.. _uWSGI-threads: https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#a-note-on-python-threads

ASGI
----

To integrate APScheduler with web frameworks using ASGI_ (Asynchronous Server Gateway
Interface), you need to use the asynchronous scheduler and tie its lifespan to the
lifespan of the application by wrapping it in middleware, as follows::

    from apscheduler import AsyncScheduler


    async def app(scope, receive, send):
        """Trivial example of an ASGI application."""
        if scope["type"] == "http":
            await receive()
            await send(
                {
                    "type": "http.response.start",
                    "status": 200,
                    "headers": [
                        [b"content-type", b"text/plain"],
                    ],
                }
            )
            await send(
                {
                    "type": "http.response.body",
                    "body": b"Hello, world!",
                    "more_body": False,
                }
            )
        elif scope["type"] == "lifespan":
            while True:
                message = await receive()
                if message["type"] == "lifespan.startup":
                    await send({"type": "lifespan.startup.complete"})
                elif message["type"] == "lifespan.shutdown":
                    await send({"type": "lifespan.shutdown.complete"})
                    return


    async def scheduler_middleware(scope, receive, send):
        if scope['type'] == 'lifespan':
            async with AsyncScheduler() as scheduler:
                await app(scope, receive, send)
        else:
            await app(scope, receive, send)

Assuming you saved this as ``example.py``, you can then run this with Hypercorn_:

.. code-block:: bash

    hypercorn example:scheduler_middleware

or with Uvicorn_:

.. code-block:: bash

    uvicorn example:scheduler_middleware

.. _ASGI: https://asgi.readthedocs.io/en/latest/index.html
.. _Hypercorn: https://gitlab.com/pgjones/hypercorn/
..
_Uvicorn: https://www.uvicorn.org/ --- # Source: https://github.com/agronholm/apscheduler/blob/master/docs/migration.rst ############################################### Migrating from previous versions of APScheduler ############################################### .. py:currentmodule:: apscheduler From v3.x to v4.0 ================= APScheduler 4.0 has undergone a partial rewrite since the 3.x series. There is currently no way to automatically import schedules from a persistent 3.x job store, but this shortcoming will be rectified before the final v4.0 release. Terminology and architectural design changes -------------------------------------------- The concept of a *job* has been split into :class:`Task`, :class:`Schedule` and :class:`Job`. See the documentation of each class (and read the tutorial) to understand their roles. **Data stores**, previously called *job stores*, have been redesigned to work with multiple running schedulers and workers, both for purposes of scalability and fault tolerance. Many data store implementations were dropped because they were either too burdensome to support, or the backing services were not sophisticated enough to handle the increased requirements. **Event brokers** are a new component in v4.0. They relay events between schedulers and workers, enabling them to work together with a shared data store. External (as opposed to local) event broker services are required in multi-node or multi-process deployment scenarios. **Triggers** are now stateful. This change was found to be necessary to properly support combining triggers (:class:`~.triggers.combining.AndTrigger` and :class:`~.triggers.combining.OrTrigger`), as they needed to keep track of the next run times of all the triggers contained within. This change also enables some more sophisticated custom trigger implementations. **Time zone** support has been revamped to use :mod:`zoneinfo` (or `backports.zoneinfo`_ on Python versions earlier than 3.9) zones instead of pytz zones. You should not use pytz with APScheduler anymore. `Entry points`_ are no longer used or supported, as they were more trouble than they were worth, particularly with packagers like py2exe or PyInstaller which by default did not package distribution metadata. Thus, triggers and data stores have to be explicitly instantiated. .. _backports.zoneinfo: https://pypi.org/project/backports.zoneinfo/ .. _Entry points: https://packaging.python.org/en/latest/specifications/entry-points/ Scheduler changes ----------------- The ``add_job()`` method is now :meth:`~Scheduler.add_schedule`. The scheduler still has a method named :meth:`~Scheduler.add_job`, but this is meant for making one-off runs of a task. Previously you would have had to call ``add_job()`` with a :class:`~triggers.date.DateTrigger` using the current time as the run time. The two most commonly used schedulers, ``BlockingScheduler`` and ``BackgroundScheduler``, have often caused confusion among users and have thus been combined into :class:`~Scheduler`. This new unified scheduler class has two methods that replace the ``start()`` method used previously: :meth:`~Scheduler.run_until_stopped` and :meth:`~Scheduler.start_in_background`. The former should be used if you previously used ``BlockingScheduler``, and the latter if you used ``BackgroundScheduler``. The asyncio scheduler has been replaced with a more generic :class:`AsyncScheduler`, which is based on AnyIO_ and thus also supports Trio_ in addition to :mod:`asyncio`. 
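
As a rough sketch of what this renaming means in practice (the ``tick`` function and
the five second interval are invented for the example, and your own configuration will
differ), a 3.x setup like this::

    from apscheduler.schedulers.blocking import BlockingScheduler


    def tick():
        print("tick")


    scheduler = BlockingScheduler()
    scheduler.add_job(tick, "interval", seconds=5)
    scheduler.start()

would look something like this with the 4.0 API::

    from apscheduler import Scheduler
    from apscheduler.triggers.interval import IntervalTrigger


    def tick():
        print("tick")


    with Scheduler() as scheduler:
        scheduler.add_schedule(tick, IntervalTrigger(seconds=5))
        scheduler.run_until_stopped()
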
The API of the async scheduler differs somewhat from its synchronous counterpart. In particular, it **requires** itself to be used as an async context manager – whereas with the synchronous scheduler, use as a context manager is recommended but not required. All other scheduler implementations have been dropped because they were either too burdensome to support, or did not seem necessary anymore. Some of the dropped implementations (particularly Qt) are likely to be re-added before v4.0 final. Schedulers no longer support multiple data stores. If you need this capability, you should run multiple schedulers instead. Configuring and running the scheduler has been radically simplified. The ``configure()`` method is gone, and all configuration is now passed as keyword arguments to the scheduler class. .. _AnyIO: https://pypi.org/project/anyio/ .. _Trio: https://pypi.org/project/trio/ Trigger changes --------------- As the scheduler is no longer used to create triggers, any supplied datetimes will be assumed to be in the local time zone. If you wish to change the local time zone, you should set the ``TZ`` environment variable to either the name of the desired timezone (e.g. ``Europe/Helsinki``) or to a path of a time zone file. See the tzlocal_ documentation for more information. **Jitter** support has been moved from individual triggers to the schedule level. This not only simplified trigger design, but also enabled the scheduler to provide information about the randomized jitter and the original run time to the user. :class:`~triggers.cron.CronTrigger` was changed to respect the standard order of weekdays, so that Sunday is now 0 and Saturday is 6. If you used numbered weekdays before, you must change your trigger configuration to match. If in doubt, use abbreviated weekday names (e.g. ``sun``, ``fri``) instead. :class:`~triggers.interval.IntervalTrigger` was changed to start immediately, instead of waiting for the first interval to pass. If you have workarounds in place to "fix" the previous behavior, you should remove them. .. _tzlocal: https://pypi.org/project/tzlocal/ From v3.0 to v3.2 ================= Prior to v3.1, the scheduler inadvertently exposed the ability to fetch and manipulate jobs before the scheduler had been started. The scheduler now requires you to call ``scheduler.start()`` before attempting to access any of the jobs in the job stores. To ensure that no old jobs are mistakenly executed, you can start the scheduler in paused mode (``scheduler.start(paused=True)``) (introduced in v3.2) to avoid any premature job processing. From v2.x to v3.0 ================= The 3.0 series is API incompatible with previous releases due to a design overhaul. Scheduler changes ----------------- * The concept of "standalone mode" is gone. For ``standalone=True``, use ``BlockingScheduler`` instead, and for ``standalone=False``, use ``BackgroundScheduler``. BackgroundScheduler matches the old default semantics. * Job defaults (like ``misfire_grace_time`` and ``coalesce``) must now be passed in a dictionary as the ``job_defaults`` option to ``BaseScheduler.configure()``. When supplying an ini-style configuration as the first argument, they will need a corresponding ``job_defaults.`` prefix. * The configuration key prefix for job stores was changed from ``jobstore.`` to ``jobstores.`` to match the dict-style configuration better. * The ``max_runs`` option has been dropped since the run counter could not be reliably preserved when replacing a job with another one with the same ID. 
To make up for this, the ``end_date`` option was added to cron and interval triggers. * The old thread pool is gone, replaced by ``ThreadPoolExecutor``. This means that the old ``threadpool`` options are no longer valid. * The trigger-specific scheduling methods have been removed entirely from the scheduler. Use the generic ``BaseScheduler.add_job()`` method or the ``@BaseScheduler.scheduled_job`` decorator instead. The signatures of these methods were changed significantly. * The ``shutdown_threadpool`` and ``close_jobstores`` options have been removed from the ``BaseScheduler.shutdown()`` method. Executors and job stores are now always shut down on scheduler shutdown. * ``Scheduler.unschedule_job()`` and ``Scheduler.unschedule_func()`` have been replaced by ``BaseScheduler.remove_job()``. You can also unschedule a job by using the job handle returned from ``BaseScheduler.add_job()``. Job store changes ----------------- The job store system was completely overhauled for both efficiency and forwards compatibility. Unfortunately, this means that the old data is not compatible with the new job stores. If you need to migrate existing data from APScheduler 2.x to 3.x, contact the APScheduler author. The Shelve job store had to be dropped because it could not support the new job store design. Use SQLAlchemyJobStore with SQLite instead. Trigger changes --------------- From 3.0 onwards, triggers now require a pytz timezone. This is normally provided by the scheduler, but if you were instantiating triggers manually before, then one must be supplied as the ``timezone`` argument. The only other backwards incompatible change was that ``get_next_fire_time()`` takes two arguments now: the previous fire time and the current datetime. From v1.x to 2.0 ================ There have been some API changes since the 1.x series. This document explains the changes made to v2.0 that are incompatible with the v1.x API. API changes ----------- * The behavior of cron scheduling with regards to default values for omitted fields has been made more intuitive -- omitted fields lower than the least significant explicitly defined field will default to their minimum values except for the week number and weekday fields * SchedulerShutdownError has been removed -- jobs are now added tentatively and scheduled for real when/if the scheduler is restarted * Scheduler.is_job_active() has been removed -- use ``job in scheduler.get_jobs()`` instead * dump_jobs() is now print_jobs() and prints directly to the given file or sys.stdout if none is given * The ``repeat`` parameter was removed from ``Scheduler.add_interval_job()`` and ``@Scheduler.interval_schedule`` in favor of the universal ``max_runs`` option * ``Scheduler.unschedule_func()`` now raises a :exc:`KeyError` if the given function is not scheduled * The semantics of ``Scheduler.shutdown()`` have changed – the method no longer accepts a numeric argument, but two booleans Configuration changes --------------------- * The scheduler can no longer be reconfigured while it's running --- # Source: https://github.com/agronholm/apscheduler/blob/master/docs/userguide.rst ########## User guide ########## .. py:currentmodule:: apscheduler Installation ============ The preferred installation method is by using `pip `_:: $ pip install apscheduler If you don't have pip installed, you need to `install that first `_. 
Interfacing with certain external services may need extra dependencies which are installable as extras: * ``asyncpg``: for the AsyncPG event broker * ``cbor``: for the CBOR serializer * ``mongodb``: for the MongoDB data store * ``mqtt``: for the MQTT event broker * ``psycopg``: for the Psycopg event broker * ``redis``: for the Redis event broker * ``sqlalchemy``: for the SQLAlchemy data store Using the extras instead of adding the corresponding libraries separately helps ensure that you will have compatible versions of the dependent libraries going forward. You can install any number of these extras with APScheduler by providing them as a comma separated list inside the brackets, like this:: pip install apscheduler[psycopg,sqlalchemy] Code examples ============= The source distribution contains the :file:`examples` directory where you can find many working examples for using APScheduler in different ways. The examples can also be `browsed online `_. Introduction ============ The core concept of APScheduler is to give the user the ability to queue Python code to be executed, either as soon as possible, later at a given time, or on a recurring schedule. The *scheduler* is the user-facing interface of the system. When it's running, it does two things concurrently. The first is processing *schedules*. From its *data store*, it fetches :ref:`schedules ` due to be run. For each such schedule, it then uses the schedule's trigger_ to calculate run times up to the present. The scheduler then creates one or more jobs (controllable by configuration) based on these run times and adds them to the data store. The second role of the scheduler is running :ref:`jobs `. The scheduler asks the `data store`_ for jobs, and then starts running those jobs. If the data store signals that it has new jobs, the scheduler will try to acquire those jobs if it is capable of accommodating more. When a scheduler completes a job, it will then also ask the data store for as many more jobs as it can handle. By default, schedulers operate in both of these roles, but can be configured to only process schedules or run jobs if deemed necessary. It may even be desirable to use the scheduler only as an interface to an external data store while leaving schedule and job processing to other scheduler instances running elsewhere. Basic concepts / glossary ========================= These are the basic components and concepts of APScheduler which will be referenced later in this guide. .. _callable: A *callable* is any object that returns ``True`` from :func:`callable`. These are: * A free function (``def something(...): ...``) * An instance method (``class Foo: ... def something(self, ...): ...``) * A class method (``class Foo: ... @classmethod ... def something(cls, ...): ...``) * A static method (``class Foo: ... @staticmethod ... def something(...): ...``) * A lambda (``lambda a, b: a + b``) * An instance of a class that contains a method named ``__call__``) .. _task: A *task* encapsulates a callable_ and a number of configuration parameters. They are often implicitly defined as a side effect of the user creating a new schedule against a callable_, but can also be :ref:`explicitly defined beforehand `. Tasks have three different roles: #. They provide the target callable to be run when a job is started #. They provide a key (task ID) on which to limit the maximum number of concurrent jobs, even between different schedules #. 
They provide a template from which certain parameters, like job executor and misfire grace time, are copied to schedules and jobs derived from the task .. _trigger: A trigger_ contains the logic and state used to calculate when a scheduled task_ should be run. .. _schedule: A *schedule* combines a task_ with a trigger_, plus a number of configuration parameters. .. _job: A *job* is request for a task_ to be run. It can be created automatically from a schedule when a scheduler processes it, or it can be directly created by the user if they directly request a task_ to be run. .. _data store: A *data store* is used to store :ref:`schedules ` and :ref:`jobs `, and to keep track of :ref:`tasks `. .. _job executor: A *job executor* runs the job_, by calling the function associated with the job's task. An executor could directly call the callable_, or do it in another thread, subprocess or even some external service. .. _event broker: An *event broker* delivers published events to all interested parties. It facilitates the cooperation between schedulers by notifying them of new or updated :ref:`schedules ` and :ref:`jobs `. .. _scheduler: A *scheduler* is the main interface of this library. It houses both a `data store`_ and an `event broker`_, plus one or more :ref:`job executors `. It contains methods users can use to work with tasks, schedules and jobs. Behind the scenes, it also processes due schedules, spawning jobs and updating the next run times. It also processes available jobs, making the appropriate :ref:`job executors ` to run them, and then sending back the results to the `data store`_. Running the scheduler ===================== The scheduler_ comes in two flavors: synchronous and asynchronous. The synchronous scheduler actually runs an asynchronous scheduler behind the scenes in a dedicated thread, so if your app runs on :mod:`asyncio` or Trio_, you should prefer the asynchronous scheduler. The scheduler can run either in the foreground, blocking on a call to :meth:`~Scheduler.run_until_stopped`, or in the background where it does its work while letting the rest of the program run. If the only intent of your program is to run scheduled tasks, then you should start the scheduler with :meth:`~Scheduler.run_until_stopped`. But if you need to do other things too, then you should call :meth:`~Scheduler.start_in_background` before running the rest of the program. In almost all cases, the scheduler should be used as a context manager. This initializes the underlying `data store`_ and `event broker`_, allowing you to use the scheduler for manipulating :ref:`tasks `, :ref:`schedules ` and jobs prior to starting the processing of schedules and jobs. Exiting the context manager will shut down the scheduler and its underlying services. This mode of operation is mandatory for the asynchronous scheduler when running it in the background, but it is preferred for the synchronous scheduler too. As a special consideration (for use with WSGI_ based web frameworks), the synchronous scheduler can be run in the background without being used as a context manager. In this scenario, the scheduler adds an :mod:`atexit` hook that will perform an orderly shutdown of the scheduler before the process terminates. .. _WSGI: https://wsgi.readthedocs.io/en/latest/what.html .. _Trio: https://trio.readthedocs.io/en/stable/ .. warning:: If you start the scheduler in the background and let the script finish execution, the scheduler will automatically shut down as well. .. tabs:: .. 
code-tab:: python Synchronous (run in foreground) from apscheduler import Scheduler with Scheduler() as scheduler: # Add schedules, configure tasks here scheduler.run_until_stopped() .. code-tab:: python Synchronous (background thread; preferred method) from apscheduler import Scheduler with Scheduler() as scheduler: # Add schedules, configure tasks here scheduler.start_in_background() .. code-tab:: python Synchronous (background thread; WSGI alternative) from apscheduler import Scheduler scheduler = Scheduler() # Add schedules, configure tasks here scheduler.start_in_background() .. code-tab:: python Asynchronous (run in foreground) import asyncio from apscheduler import AsyncScheduler async def main(): async with AsyncScheduler() as scheduler: # Add schedules, configure tasks here await scheduler.run_until_stopped() asyncio.run(main()) .. code-tab:: python Asynchronous (background task) import asyncio from apscheduler import AsyncScheduler async def main(): async with AsyncScheduler() as scheduler: # Add schedules, configure tasks here await scheduler.start_in_background() asyncio.run(main()) .. _configuring-tasks: Configuring tasks ================= In order to add :ref:`schedules ` or :ref:`jobs ` to the `data store`_, you need to have a task_ that defines which callable_ will be called when each job_ is run. In most cases, you don't need to go through this step, and instead have a task_ implicitly created for you by the methods that add schedules or jobs. Explicitly configuring a task is generally only necessary in the following cases: * You need to have more than one task with the same callable * You need to set any of the task settings to non-default values * You need to add schedules/jobs targeting lambdas, nested functions or instances of unserializable classes There are two ways to explicitly configure tasks: #. Call the :meth:`~Scheduler.configure_task` scheduler method #. Decorate your target function with :func:`@task ` .. seealso:: :ref:`settings_inheritance` Limiting the number of concurrently executing instances of a job ---------------------------------------------------------------- **Option**: ``max_running_jobs`` It is possible to control the maximum number of concurrently running jobs for a particular task. By default, only one job is allowed to be run for every task. This means that if the job is about to be run but there is another job for the same task still running, the later job is terminated with the outcome of :attr:`~JobOutcome.missed_start_deadline`. To allow more jobs to be concurrently running for a task, pass the desired maximum number as the ``max_running_jobs`` keyword argument to :meth:`~Scheduler.add_schedule`. .. _controlling-how-much-a-job-can-be-started-late: Controlling how much a job can be started late ---------------------------------------------- **Option**: ``misfire_grace_time`` This option applies to scheduled jobs. Some tasks are time sensitive, and should not be run at all if they fail to be started on time (like, for example, if the scheduler(s) were down while they were supposed to be running the scheduled jobs). When a scheduler acquires jobs, the data store discards any jobs that have passed their start deadlines (scheduled time + ``misfire_grace_time``). Such jobs are released with the outcome of :attr:`~JobOutcome.missed_start_deadline`. Adding custom metadata ---------------------- **Option**: ``metadata`` This option allows adding custom, JSON compatible metadata to tasks, schedules and jobs. 
Here, "JSON compatible" means the following restrictions:

* The top-level metadata object must be a :class:`dict`
* All :class:`dict` keys must be strings
* Values can be :class:`int`, :class:`float`, :class:`str`, :class:`bool` or
  :data:`None`

.. note:: Top level metadata keys are merged with any explicitly passed values, in
    such a way that explicitly passed values override any values from the task level.

.. _settings_inheritance:

Inheritance of settings
-----------------------

When tasks are configured, or schedules or jobs created, they will inherit the
settings of any "parent" object according to the following rules:

* Task configuration parameters are resolved according to the following, descending
  priority order:

  #. Parameters passed directly to :meth:`~AsyncScheduler.configure_task`
  #. Parameters bound to the target function via :func:`@task `
  #. The scheduler's task defaults

* Schedules inherit settings from their respective tasks
* Jobs created from schedules inherit the settings from their parent schedules
* Jobs created directly inherit the settings from their parent tasks

The ``metadata`` parameter works a bit differently. Top level keys will be merged in
such a way that keys from a more explicit configuration level overwrite keys from a
more generic level. If any parameter is unset, it will be looked up on the next level.
Here is an example that illustrates the lookup order::

    from apscheduler import Scheduler, TaskDefaults, task


    @task(max_running_jobs=3, metadata={"foo": ["taskfunc"]})
    def mytaskfunc():
        print("running stuff")


    task_defaults = TaskDefaults(
        misfire_grace_time=15,
        job_executor="processpool",
        metadata={"global": 3, "foo": ["bar"]}
    )
    with Scheduler(task_defaults=task_defaults) as scheduler:
        scheduler.configure_task(
            "sometask",
            func=mytaskfunc,
            job_executor="threadpool",
            metadata={"direct": True}
        )

The resulting task will have the following parameters:

* ``id``: ``'sometask'`` (from the :meth:`~AsyncScheduler.configure_task` call)
* ``job_executor``: ``'threadpool'`` (from the :meth:`~AsyncScheduler.configure_task`
  call, where it overrides the scheduler-level default)
* ``max_running_jobs``: 3 (from the decorator)
* ``misfire_grace_time``: 15 (from the scheduler-level default)
* ``metadata``: ``{"global": 3, "foo": ["taskfunc"], "direct": True}``

Scheduling tasks
================

To create a schedule for running a task, you need, at the minimum:

* A preconfigured task_, OR a callable_ to be run
* A trigger_

If you've configured a task (as per the previous section), you can pass the task
object or its ID to :meth:`Scheduler.add_schedule`. As a shortcut, you can pass a
callable_ instead, in which case a task will be automatically created for you if
necessary.

If the callable you're trying to schedule is either a lambda or a nested function,
then you need to explicitly create a task beforehand, as it is not possible to create
a reference (``package.module:varname``) to these types of callables.

The trigger determines the scheduling logic for your schedule. In other words, it is
used to calculate the datetimes on which the task will be run.
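
For instance, here is a minimal sketch of scheduling a callable (the ``tick`` function
and the ten second interval are made up for illustration)::

    from apscheduler import Scheduler
    from apscheduler.triggers.interval import IntervalTrigger


    def tick() -> None:
        print("Hello from a scheduled task")


    with Scheduler() as scheduler:
        # A task is implicitly created for tick(); add_schedule() returns the
        # identifier of the new schedule
        schedule_id = scheduler.add_schedule(tick, IntervalTrigger(seconds=10))
        scheduler.run_until_stopped()
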
APScheduler comes with a number of built-in trigger classes: * :class:`~triggers.date.DateTrigger`: use when you want to run the task just once at a certain point of time * :class:`~triggers.interval.IntervalTrigger`: use when you want to run the task at fixed intervals of time * :class:`~triggers.cron.CronTrigger`: use when you want to run the task periodically at certain time(s) of day * :class:`~triggers.calendarinterval.CalendarIntervalTrigger`: use when you want to run the task on calendar-based intervals, at a specific time of day Combining multiple triggers --------------------------- Occasionally, you may find yourself in a situation where your scheduling needs are too complex to be handled with any of the built-in triggers directly. One examples of such a need would be when you want the task to run at 10:00 from Monday to Friday, but also at 11:00 from Saturday to Sunday. A single :class:`~triggers.cron.CronTrigger` would not be able to handle this case, but an :class:`~triggers.combining.OrTrigger` containing two cron triggers can:: from apscheduler.triggers.combining import OrTrigger from apscheduler.triggers.cron import CronTrigger trigger = OrTrigger( CronTrigger(day_of_week="mon-fri", hour=10), CronTrigger(day_of_week="sat-sun", hour=11), ) On the first run, :class:`~triggers.combining.OrTrigger` generates the next run times from both cron triggers and saves them internally. It then returns the earliest one. On the next run, it generates a new run time from the trigger that produced the earliest run time on the previous run, and then again returns the earliest of the two run times. This goes on until all the triggers have been exhausted, if ever. Another example would be a case where you want the task to be run every 2 months at 10:00, but not on weekends (Saturday or Sunday):: from apscheduler.triggers.calendarinterval import CalendarIntervalTrigger from apscheduler.triggers.combining import AndTrigger from apscheduler.triggers.cron import CronTrigger trigger = AndTrigger( CalendarIntervalTrigger(months=2, hour=10), CronTrigger(day_of_week="mon-fri", hour=10), ) On the first run, :class:`~triggers.combining.AndTrigger` generates the next run times from both the :class:`~triggers.calendarinterval.CalendarIntervalTrigger` and :class:`~triggers.cron.CronTrigger`. If the run times coincide, it will return that run time. Otherwise, it will calculate a new run time from the trigger that produced the earliest run time. It will keep doing this until a match is found, one of the triggers has been exhausted or the maximum number of iterations (1000 by default) is reached. If this trigger is created on 2022-06-07 at 09:00:00, its first run times would be: * 2022-06-07 10:00:00 * 2022-10-07 10:00:00 * 2022-12-07 10:00:00 Notably, 2022-08-07 is skipped because it falls on a Sunday. Removing schedules ------------------ To remove a previously added schedule, call :meth:`~Scheduler.remove_schedule`. Pass the identifier of the schedule you want to remove as an argument. This is the ID you got from :meth:`~Scheduler.add_schedule`. Note that removing a schedule does not cancel any jobs derived from it, but does prevent further jobs from being created from that schedule. Pausing schedules ----------------- To pause a schedule, call :meth:`~Scheduler.pause_schedule`. Pass the identifier of the schedule you want to pause as an argument. This is the ID you got from :meth:`~Scheduler.add_schedule`. 
Pausing a schedule prevents any new jobs from being created from it, but does not
cancel any jobs that have already been created from that schedule.

The schedule can be unpaused by calling :meth:`~Scheduler.unpause_schedule` with the
identifier of the schedule you want to unpause. By default the schedule will retain
the next fire time it had when it was paused, which may result in the schedule being
considered to have misfired when it is unpaused, resulting in whatever misfire
behavior it has configured (see :ref:`controlling-how-much-a-job-can-be-started-late`
for more details).

The ``resume_from`` parameter can be used to specify the time from which the schedule
should be resumed. This can be used to avoid the misfire behavior mentioned above. It
can be either a datetime object, or the string ``"now"`` as a convenient shorthand for
the current datetime. If this parameter is provided, the schedule's trigger will be
repeatedly advanced to determine a next fire time that is at or after the specified
time to resume from.

Controlling how jobs are queued from schedules
----------------------------------------------

In most cases, when a scheduler processes a schedule, it queues a new job using the
run time currently marked for the schedule. Then it updates the next run time using
the schedule's trigger and releases the schedule back to the data store.

But sometimes a situation occurs where the schedule did not get processed often or
quickly enough, and one or more next run times produced by the trigger are actually in
the past. In a situation like that, the scheduler needs to decide what to do: to queue
a job for every run time produced, or to *coalesce* them all into a single job,
effectively just kicking off a single job.

To control this, pass the ``coalesce`` argument to :meth:`~Scheduler.add_schedule`.
The possible values are:

* :data:`~CoalescePolicy.latest`: queue exactly one job, using the **latest** run time
  as the designated run time
* :data:`~CoalescePolicy.earliest`: queue exactly one job, using the **earliest** run
  time as the designated run time
* :data:`~CoalescePolicy.all`: queue one job for **each** of the calculated run times

The biggest difference between the first two options is how the designated run time,
and by extension, the starting deadline for the job is selected. With the first
option, the job is less likely to be skipped due to being started late since the
latest of all the collected run times is used for the deadline calculation. As
explained in the previous section, the starting deadline derived from the *misfire
grace time* affects the newly queued job.

Running tasks without scheduling
================================

In some cases, you want to run tasks directly, without involving schedules:

* You're only interested in using the scheduler system as a job queue
* You're interested in the job's return value

To queue a job and wait for its completion and get the result, the easiest way is to
use :meth:`~Scheduler.run_job`. If you prefer to just launch a job and not wait for
its result, use :meth:`~Scheduler.add_job` instead. If you want to get the results
later, you need to pass an appropriate ``result_expiration_time`` parameter to
:meth:`~Scheduler.add_job` so that the result is saved. Then, you can call
:meth:`~Scheduler.get_job_result` with the job ID you got from
:meth:`~Scheduler.add_job` to retrieve the result.
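
Here is a minimal sketch of both approaches using a synchronous scheduler. The ``add``
function and the five minute result expiration are made up for illustration, and the
exact signatures and result attributes are documented in the API reference::

    from datetime import timedelta

    from apscheduler import Scheduler


    def add(x: int, y: int) -> int:
        return x + y


    with Scheduler() as scheduler:
        # The scheduler must be running in order to process the queued jobs
        scheduler.start_in_background()

        # Queue a job and block until its result is available
        print(scheduler.run_job(add, args=[2, 3]))

        # Fire and forget, but keep the result around so it can be fetched later
        job_id = scheduler.add_job(
            add, args=[2, 3], result_expiration_time=timedelta(minutes=5)
        )
        print(scheduler.get_job_result(job_id).return_value)
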
Context variables
=================

Schedulers provide certain `context variables`_ available to the tasks being run:

* The current (synchronous) scheduler: :data:`~current_scheduler`
* The current asynchronous scheduler: :data:`~current_async_scheduler`
* Information about the job being currently run: :data:`~current_job`

Here's an example::

    from apscheduler import current_job


    def my_task_function():
        job_info = current_job.get()
        print(
            f"This is job {job_info.id} and was spawned from schedule "
            f"{job_info.schedule_id}"
        )

.. _context variables: https://docs.python.org/3/library/contextvars.html

.. _scheduler-events:

Subscribing to events
=====================

Schedulers have the ability to notify listeners when some event occurs in the
scheduler system. Examples of such events would be schedulers or workers starting up
or shutting down, or schedules or jobs being created or removed from the data store.

To listen to events, you need a callable_ that takes a single positional argument
which is the event object. Then, you need to decide which events you're interested in:

.. tabs::

    .. code-tab:: python Synchronous

        from apscheduler import Event, JobAcquired, JobReleased

        def listener(event: Event) -> None:
            print(f"Received {event.__class__.__name__}")

        scheduler.subscribe(listener, {JobAcquired, JobReleased})

    .. code-tab:: python Asynchronous

        from apscheduler import Event, JobAcquired, JobReleased

        async def listener(event: Event) -> None:
            print(f"Received {event.__class__.__name__}")

        scheduler.subscribe(listener, {JobAcquired, JobReleased})

This example subscribes to the :class:`~JobAcquired` and :class:`~JobReleased` event
types. The callback will receive an event of either type, and prints the name of the
class of the received event.

Asynchronous schedulers and workers support both synchronous and asynchronous
callbacks, but their synchronous counterparts only support synchronous callbacks.

When **distributed** event brokers (that is, other than the default one) are being
used, events other than the ones relating to the life cycles of schedulers and
workers, will be sent to all schedulers and workers connected to that event broker.

Clean-up of expired jobs, job results and schedules
===================================================

Each scheduler runs the data store's :meth:`~.abc.DataStore.cleanup` method
periodically, configurable via the ``cleanup_interval`` scheduler parameter. This
ensures that the data store doesn't get filled with unused data over time.

Deployment
==========

Using persistent data stores
----------------------------

The default data store, :class:`~datastores.memory.MemoryDataStore`, stores data only
in memory so all the schedules and jobs that were added to it will be erased if the
process crashes.

When you need your schedules and jobs to survive the application shutting down, you
need to use a *persistent data store*. Such data stores do have additional
considerations, compared to the memory data store:

* Task arguments must be *serializable*
* You must either trust the data store, or use an alternate *serializer*
* A *conflict policy* and an *explicit identifier* must be defined for schedules that
  are added at application startup

These requirements warrant some explanation. The first point means that since
persisting data means saving it externally, either in a file or by sending it to a
database server, all the objects involved are converted to bytestrings. This process
is called *serialization*.
By default, this is done using :mod:`pickle`, which guarantees the best compatibility but is notorious for being vulnerable to simple injection attacks. This brings us to the second point. If you cannot be sure that nobody can maliciously alter the externally stored serialized data, it would be best to use another serializer. The built-in alternatives are: * :class:`~serializers.cbor.CBORSerializer` * :class:`~serializers.json.JSONSerializer` The former requires the cbor2_ library, but supports a wider variety of types natively. The latter has no dependencies but has very limited support for different types. The third point relates to situations where you're essentially adding the same schedule to the data store over and over again. If you don't specify a static identifier for the schedules added at the start of the application, you will end up with an increasing number of redundant schedules doing the same thing, which is probably not what you want. To that end, you will need to come up with some identifying name which will ensure that the same schedule will not be added over and over again (as data stores are required to enforce the uniqueness of schedule identifiers). You'll also need to decide what to do if the schedule already exists in the data store (that is, when the application is started the second time) by passing the ``conflict_policy`` argument. Usually you want the :data:`~ConflictPolicy.replace` option, which replaces the existing schedule with the new one. .. seealso:: You can find practical examples of persistent data stores in the :file:`examples/standalone` directory (``async_postgres.py`` and ``async_mysql.py``). .. _cbor2: https://pypi.org/project/cbor2/ Using multiple schedulers ------------------------- There are several situations in which you would want to run several schedulers against the same data store at once: * Running a server application (usually a web app) with multiple worker processes * You need fault tolerance (scheduling will continue even if a node or process running a scheduler goes down) When you have multiple schedulers running at once, they need to be able to coordinate their efforts so that the schedules don't get processed more than once and the schedulers know when to wake up even if another scheduler added the next due schedule to the data store. To this end, a shared *event broker* must be configured. .. seealso:: You can find practical examples of data store sharing in the :file:`examples/web` directory. Using a scheduler without running it ------------------------------------ Some deployment scenarios may warrant the use of a scheduler for only interfacing with an external data store, for things like configuring tasks, adding schedules or queuing jobs. One such practical use case is a web application that needs to run heavy computations elsewhere so they don't cause performance issues with the web application itself. You can then run one or more schedulers against the same data store and event broker elsewhere where they don't disturb the web application. These schedulers will do all the heavy lifting like processing schedules and running jobs. .. seealso:: A practical example of this separation of concerns can be found in the :file:`examples/separate_worker` directory. Explicitly assigning an identity to the scheduler ------------------------------------------------- If you're running one or more schedulers against a persistent data store in a production setting, it'd be wise to assign each scheduler a custom identity. 
The reason for this is twofold:

#. It helps you figure out which jobs are being run where
#. It allows crashed jobs to be cleared out more quickly, as other schedulers aren't allowed to clean them up until the jobs' timeouts expire

The best choice would be something that the environment guarantees to be unique among all the scheduler instances but stays the same when the scheduler instance is restarted. For example, on Kubernetes, this would be the name of the pod where the scheduler is running, assuming of course that there is only one scheduler running in each pod against the same data store.

Of course, if you're only ever running one scheduler against a persistent data store, you can just use a static scheduler ID.

If no ID is explicitly given, the scheduler generates an ID by concatenating the following:

* the current host name
* the current process ID
* the ID of the scheduler instance

.. _troubleshooting:

Troubleshooting
===============

If something isn't working as expected, it will be helpful to increase the logging level of the ``apscheduler`` logger to the ``DEBUG`` level. If you do not yet have logging enabled in the first place, you can do this::

    import logging

    logging.basicConfig()
    logging.getLogger('apscheduler').setLevel(logging.DEBUG)

This should provide lots of useful information about what's going on inside the scheduler and/or worker.

Also make sure that you check the :doc:`faq` section to see if your problem already has a solution.

Reporting bugs
==============

A `bug tracker `_ is provided by GitHub.

Getting help
============

If you have problems or other questions, you can either:

* Ask in the `apscheduler `_ room on Gitter
* Post a question on `GitHub discussions`_, or
* Post a question on StackOverflow_ and add the ``apscheduler`` tag

.. _GitHub discussions: https://github.com/agronholm/apscheduler/discussions/categories/q-a
.. _StackOverflow: http://stackoverflow.com/questions/tagged/apscheduler

---

# Source: https://github.com/agronholm/apscheduler/blob/master/docs/versionhistory.rst

Version history
===============

To find out how to migrate your application from a previous version of APScheduler, see the :doc:`migration section `.
**UNRELEASED** - **BREAKING** Switched the MongoDB data store to use the asynchronous API in ``pymongo`` and bumped the minimum ``pymongo`` version to v4.13.0 - Dropped support for Python 3.9 - Fixed an issue where ``CronTrigger`` does not convert ``start_time`` to ``self.timezone`` (`#1061 `_; PR by @jonasitzmann) - Fixed an issue where ``CronTrigger.next()`` returned a non-existing date on a DST change (`#1059 `_; PR by @jonasitzmann) - Fixed jobs that were being run when the scheduler was gracefully stopped being left in an acquired state (`#946 `_) **4.0.0a6** - **BREAKING** Refactored ``AsyncpgEventBroker`` to directly accept a connection string, thus eliminating the need for the ``AsyncpgEventBroker.from_dsn()`` class method - **BREAKING** Added the ``extend_acquired_schedule_leases()`` data store method to prevent other schedulers from acquiring schedules already being processed by a scheduler, if that's taking unexpectedly long for some reason - **BREAKING** Added the ``extend_acquired_job_leases()`` data store method to prevent jobs from being cleaned up as if they had been abandoned (`#864 `_) - **BREAKING** Changed the ``cleanup()`` data store method to also be responsible for releasing jobs whose leases have expired (so the schedulers responsible for them have probably died) - **BREAKING** Changed most attributes in ``Task`` and ``Schedule`` classes to be read-only - **BREAKING** Refactored the ``release_schedules()`` data store method to take a sequence of ``ScheduleResult`` instances instead of a sequence of schedules, to enable the memory data store to handle schedule updates more efficiently - **BREAKING** Replaced the data store ``lock_expiration_delay`` parameter with a new scheduler-level parameter, ``lease_duration`` which is then used to call the various data store methods - **BREAKING** Added the ``job_result_expiration_time`` field to the ``Schedule`` class, to allow the job results from scheduled jobs to stay around for some time (`#927 `_) - **BREAKING** Added an index for the ``created_at`` job field, so acquiring jobs would be faster when there are a lot of them - **BREAKING** Removed the ``job_executor`` and ``max_running_jobs`` parameters from ``add_schedule()`` and ``add_run_job()`` (explicitly configure the task using ``configure_task()`` or by using the new ``@task`` decorator - **BREAKING** Replaced the ``default_job_executor`` scheduler parameter with a more comprehensive ``task_defaults`` parameter - Added the ``@task`` decorator for specifying task configuration parameters bound to a function - **BREAKING** Changed tasks to only function as job templates as well as buckets to limit maximum concurrent job execution - **BREAKING** Changed the ``timezone`` argument to ``CronTrigger.from_crontab()`` into a keyword-only argument - **BREAKING** Added the ``metadata`` field to tasks, schedules and jobs - **BREAKING** Added logic to store ``last_fire_time`` in datastore implementations (PR by @hlobit) - **BREAKING** Added the ``reap_abandoned_jobs()`` abstract method to ``DataStore`` which the scheduler calls before processing any jobs in order to immediately mark jobs left in an acquired state when the scheduler crashed - Added the ``start_time`` and ``end_time`` arguments to ``CronTrigger.from_crontab()`` (`#676 `_) - Added the ``psycopg`` event broker - Added useful indexes and removed useless ones in ``SQLAlchemyDatastore`` and ``MongoDBDataStore`` - Changed the ``lock_expiration_delay`` parameter of built-in data stores to accept a ``timedelta`` as well as 
``int`` or ``float`` - Fixed serialization error with ``CronTrigger`` when pausing a schedule (`#864 `_) - Fixed ``TypeError: object NoneType can't be used in 'await' expression`` at teardown of ``SQLAlchemyDataStore`` when it was passed a URL that implicitly created a synchronous engine - Fixed serializers raising their own exceptions instead of ``SerializationError`` and ``DeserializationError`` as appropriate - Fixed ``repr()`` outputs of schedulers, data stores and event brokers to be much more useful and reasonable - Fixed race condition in ``MongoDBDataStore`` that allowed multiple schedulers to acquire the same schedules at once - Changed ``SQLAlchemyDataStore`` to automatically create the explicitly specified schema if it's missing (PR by @zhu0629) - Fixed an issue with ``CronTrigger`` infinitely looping to get next date when DST ends (`#980 `_; PR by @hlobit) - Skip dispatching extend_acquired_job_leases with no jobs (PR by @JacobHayes) - Fixed schedulers not immediately processing schedules that the scheduler left in an acquired state after a crash - Fixed the job lease extension task exiting prematurely while the scheduler is starting (PR by @JacobHayes) - Migrated test and documentation dependencies from extras to dependency groups - Fixed ``add_job()`` overwriting task configuration (PR by @mattewid) **4.0.0a5** - **BREAKING** Added the ``cleanup()`` scheduler method and a configuration option (``cleanup_interval``). A corresponding abstract method was added to the ``DataStore`` class. This method purges expired job results and schedules that have exhausted their triggers and have no more associated jobs running. Previously, schedules were automatically deleted instantly once their triggers could no longer produce any fire times. - **BREAKING** Made publishing ``JobReleased`` events the responsibility of the ``DataStore`` implementation, rather than the scheduler, for consistency with the ``acquire_jobs()`` method - **BREAKING** The ``started_at`` field was moved from ``Job`` to ``JobResult`` - **BREAKING** Removed the ``from_url()`` class methods of ``SQLAlchemyDataStore``, ``MongoDBDataStore`` and ``RedisEventBroker`` in favor of the ability to pass a connection url to the initializer - Added the ability to pause and unpause schedules (PR by @WillDaSilva) - Added the ``scheduled_start`` field to the ``JobAcquired`` event - Added the ``scheduled_start`` and ``started_at`` fields to the ``JobReleased`` event - Fixed large parts of ``MongoDBDataStore`` still calling blocking functions in the event loop thread - Fixed JSON serialization of triggers that had been used at least once - Fixed dialect name checks in the SQLAlchemy job store - Fixed JSON and CBOR serializers unable to serialize enums - Fixed infinite loop in CalendarIntervalTrigger with UTC timezone (PR by unights) - Fixed scheduler not resuming job processing when ``max_concurrent_jobs`` had been reached and then a job was completed, thus making job processing possible again (PR by MohammadAmin Vahedinia) - Fixed the shutdown procedure of the Redis event broker - Fixed ``SQLAlchemyDataStore`` not respecting custom schema name when creating enums - Fixed skipped intervals with overlapping schedules in ``AndTrigger`` (#911 _; PR by Bennett Meares) - Fixed implicitly created client instances in data stores and event brokers not being closed along with the store/broker **4.0.0a4** - **BREAKING** Renamed any leftover fields named ``executor`` to ``job_executor`` (this breaks data store compatibility) - **BREAKING** 
Switched to using the timezone aware timestamp column type on Oracle - **BREAKING** Fixed precision issue with interval columns on MySQL - **BREAKING** Fixed datetime comparison issues on SQLite and MySQL - **BREAKING** Worked around datetime microsecond precision issue on MongoDB - **BREAKING** Renamed the ``worker_id`` field to ``scheduler_id`` in the ``JobAcquired`` and ``JobReleased`` events - **BREAKING** Added the ``task_id`` attribute to the ``ScheduleAdded``, ``ScheduleUpdated`` and ``ScheduleRemoved`` events - **BREAKING** Added the ``finished`` attribute to the ``ScheduleRemoved`` event - **BREAKING** Added the ``logger`` parameter to ``Datastore.start()`` and ``EventBroker.start()`` to make both use the scheduler's assigned logger - **BREAKING** Made the ``apscheduler.marshalling`` module private - Added the ``configure_task()`` and ``get_tasks()`` scheduler methods - Fixed out of order delivery of events delivered using worker threads - Fixed schedule processing not setting job start deadlines correctly **4.0.0a3** - **BREAKING** The scheduler classes were moved to be importable (only) directly from the ``apscheduler`` package (``apscheduler.Scheduler`` and ``apscheduler.AsyncScheduler``) - **BREAKING** Removed the "tags" field in schedules and jobs (this will be added back when the feature has been fully thought through) - **BREAKING** Removed the ``JobInfo`` class in favor of just using the ``Job`` class (which is now immutable) - **BREAKING** Workers were merged into schedulers. As the ``Worker`` and ``AsyncWorker`` classes have been removed, you now need to pass ``role=SchedulerRole.scheduler`` to the scheduler to prevent it from processing due jobs. The worker event classes (``WorkerEvent``, ``WorkerStarted``, ``WorkerStopped``) have also been removed. - **BREAKING** The synchronous interfaces for event brokers and data stores have been removed. Synchronous libraries can still be used to implement these services through the use of ``anyio.to_thread.run_sync()``. - **BREAKING** The ``current_worker`` context variable has been removed - **BREAKING** The ``current_scheduler`` context variable is now specified to only contain the currently running instance of a **synchronous** scheduler (``apscheduler.Scheduler``). The asynchronous scheduler instance can be fetched from the new ``current_async_scheduler`` context variable, and will always be available when a scheduler is running in the current context, while ``current_scheduler`` is only available when the synchronous wrapper is being run. - **BREAKING** Changed the initialization of data stores and event brokers to use a single ``start()`` method that accepts an ``AsyncExitStack`` (and, depending on the interface, other arguments too) - **BREAKING** Added a concept of "job executors". This determines how the task function is executed once picked up by a worker. Several data structures and scheduler methods have a new field/parameter for this, ``job_executor``. This addition requires database schema changes too. 
- Dropped support for Python 3.7 - Added support for Python 3.12 - Added the ability to run jobs in worker processes, courtesy of the ``processpool`` executor - Added the ability to run jobs in the Qt event loop via the ``qt`` executor - Added the ``get_jobs()`` scheduler method - The synchronous scheduler now runs an asyncio event loop in a thread, acting as a façade for ``AsyncScheduler`` - Fixed the ``schema`` parameter in ``SQLAlchemyDataStore`` not being applied - Fixed SQLalchemy 2.0 compatibility **4.0.0a2** - **BREAKING** Changed the scheduler API to always require a call to either ``run_until_stopped()`` or ``start_in_background()`` to start the scheduler (using it as a context manager is no longer enough) - **BREAKING** Replaced ``from_asyncpg_pool()`` with ``from_dsn()`` in the asyncpg event broker - Added an async Redis event broker - Added automatic reconnection to the Redis event brokers (sync and async) - Added automatic reconnection to the asyncpg event broker - Changed ``from_async_sqla_engine()`` in asyncpg event broker to only copy the connection options instead of directly using the engine - Simplified the MQTT event broker by providing a default ``client`` instance if omitted - Fixed ``CancelledError`` being reported as a crash on Python 3.7 - Fixed JSON/CBOR serialization of ``JobReleased`` events **4.0.0a1** This was a major rewrite/redesign of most parts of the project. See the :doc:`migration section ` section for details. .. warning:: The v4.0 series is provided as a **pre-release** and may change in a backwards incompatible fashion without any migration pathway, so do NOT use this release in production! - Made persistent data stores shareable between multiple processes and nodes - Enhanced data stores to be more resilient against temporary connectivity failures - Refactored executors (now called *workers*) to pull jobs from the data store so they can be run independently from schedulers - Added full async support (:mod:`asyncio` and Trio_) via AnyIO_ - Added type annotations to the code base - Added the ability to queue jobs directly without scheduling them - Added alternative serializers (CBOR, JSON) - Added the ``CalendarInterval`` trigger - Added the ability to access the current scheduler (under certain circumstances), current worker and the currently running job via context-local variables - Added schedule level support for jitter - Made triggers stateful - Added threshold support for ``AndTrigger`` - Migrated from ``pytz`` time zones to standard library ``zoneinfo`` zones - Allowed a wider range of tzinfo implementations to be used (though ``zoneinfo`` is preferred) - Changed ``IntervalTrigger`` to start immediately instead of first waiting for one interval - Changed ``CronTrigger`` to use Sunday as weekday number 0, as per the crontab standard - Dropped support for Python 2.X, 3.5 and 3.6 - Dropped support for the Qt, Twisted, Tornado and Gevent schedulers - Dropped support for the Redis, RethinkDB and Zookeeper job stores .. _Trio: https://pypi.org/project/trio/ .. 
_AnyIO: https://github.com/agronholm/anyio **3.9.1** * Removed a leftover check for pytz ``localize()`` and ``normalize()`` methods **3.9.0** - Added support for PySide6 to the Qt scheduler - No longer enforce pytz time zones (support for others is experimental in the 3.x series) - Fixed compatibility with PyMongo 4 - Fixed pytz deprecation warnings - Fixed RuntimeError when shutting down the scheduler from a scheduled job **3.8.1** - Allowed the use of tzlocal v4.0+ in addition to v2.* **3.8.0** - Allowed passing through keyword arguments to the underlying stdlib executors in the thread/process pool executors (PR by Albert Xu) **3.7.0** - Dropped support for Python 3.4 - Added PySide2 support (PR by Abdulla Ibrahim) - Pinned ``tzlocal`` to a version compatible with pytz - Ensured that jitter is always non-negative to prevent triggers from firing more often than intended - Changed ``AsyncIOScheduler`` to obtain the event loop in ``start()`` instead of ``__init__()``, to prevent situations where the scheduler won't run because it's using a different event loop than the one currently running - Made it possible to create weak references to ``Job`` instances - Made the schedulers explicitly raise a descriptive ``TypeError`` when serialization is attempted - Fixed Zookeeper job store using backslashes instead of forward slashes for paths on Windows (PR by Laurel-rao) - Fixed deprecation warnings on the MongoDB job store and increased the minimum PyMongo version to 3.0 - Fixed ``BlockingScheduler`` and ``BackgroundScheduler`` shutdown hanging after the user has erroneously tried to start it twice - Fixed memory leak when coroutine jobs raise exceptions (due to reference cycles in tracebacks) - Fixed inability to schedule wrapped functions with extra arguments when the wrapped function cannot accept them but the wrapper can (original PR by Egor Malykh) - Fixed potential ``where`` clause error in the SQLAlchemy job store when a subclass uses more than one search condition - Fixed a problem where bound methods added as jobs via textual references were called with an unwanted extra ``self`` argument (PR by Pengjie Song) - Fixed ``BrokenPoolError`` in ``ProcessPoolExecutor`` so that it will automatically replace the broken pool with a fresh instance **3.6.3** - Fixed Python 2.7 accidentally depending on the ``trollius`` package (regression from v3.6.2) **3.6.2** - Fixed handling of :func:`~functools.partial` wrapped coroutine functions in ``AsyncIOExecutor`` and ``TornadoExecutor`` (PR by shipmints) **3.6.1** - Fixed OverflowError on Qt scheduler when the wait time is very long - Fixed methods inherited from base class could not be executed by processpool executor (PR by Yang Jian) **3.6.0** - Adapted ``RedisJobStore`` to v3.0 of the ``redis`` library - Adapted ``RethinkDBJobStore`` to v2.4 of the ``rethink`` library - Fixed ``DeprecationWarnings`` about ``collections.abc`` on Python 3.7 (PR by Roman Levin) **3.5.3** - Fixed regression introduced in 3.5.2: Class methods were mistaken for instance methods and thus were broken during serialization - Fixed callable name detection for methods in old style classes **3.5.2** - Fixed scheduling of bound methods on persistent job stores (the workaround of scheduling ``YourClass.methodname`` along with an explicit ``self`` argument is no longer necessary as this is now done automatically for you) - Added the FAQ section to the docs - Made ``BaseScheduler.start()`` raise a ``RuntimeError`` if running under uWSGI with threads disabled **3.5.1** - Fixed
``OverflowError`` on Windows when the wait time is too long - Fixed ``CronTrigger`` sometimes producing fire times beyond ``end_date`` when jitter is enabled (thanks to gilbsgilbs for the tests) - Fixed ISO 8601 UTC offset information being silently discarded from string formatted datetimes by adding support for parsing them **3.5.0** - Added the ``engine_options`` option to ``SQLAlchemyJobStore`` - Added the ``jitter`` options to ``IntervalTrigger`` and ``CronTrigger`` (thanks to gilbsgilbs) - Added combining triggers (``AndTrigger`` and ``OrTrigger``) - Added better validation for the steps and ranges of different expressions in ``CronTrigger`` - Added support for named months (``jan`` – ``dec``) in ``CronTrigger`` month expressions - Added support for creating a ``CronTrigger`` from a crontab expression - Allowed spaces around commas in ``CronTrigger`` fields - Fixed memory leak due to a cyclic reference when jobs raise exceptions (thanks to gilbsgilbs for help on solving this) - Fixed passing ``wait=True`` to ``AsyncIOScheduler.shutdown()`` (although it doesn't do much) - Cancel all pending futures when ``AsyncIOExecutor`` is shut down **3.4.0** - Dropped support for Python 3.3 - Added the ability to specify the table schema for ``SQLAlchemyJobStore`` (thanks to Meir Tseitlin) - Added a workaround for the ``ImportError`` when used with PyInstaller and the likes (caused by the missing packaging metadata when APScheduler is packaged with these tools) **3.3.1** - Fixed Python 2.7 compatibility in ``TornadoExecutor`` **3.3.0** - The asyncio and Tornado schedulers can now run jobs targeting coroutine functions (requires Python 3.5; only native coroutines (``async def``) are supported) - The Tornado scheduler now uses TornadoExecutor as its default executor (see above as for why) - Added ZooKeeper job store (thanks to Jose Ignacio Villar for the patch) - Fixed job store failure (``get_due_jobs()``) causing the scheduler main loop to exit (it now waits a configurable number of seconds before retrying) - Fixed ``@scheduled_job`` not working when serialization is required (persistent job stores and ``ProcessPoolScheduler``) - Improved import logic in ``ref_to_obj()`` to avoid errors in cases where traversing the path with ``getattr()`` would not work (thanks to Jarek Glowacki for the patch) - Fixed CronTrigger's weekday position expressions failing on Python 3 - Fixed CronTrigger's range expressions sometimes allowing values outside the given range **3.2.0** - Added the ability to pause and unpause the scheduler - Fixed pickling problems with persistent jobs when upgrading from 3.0.x - Fixed AttributeError when importing apscheduler with setuptools < 11.0 - Fixed some events missing from ``apscheduler.events.__all__`` and ``apscheduler.events.EVENTS_ALL`` - Fixed wrong run time being set for date trigger when the timezone isn't the same as the local one - Fixed builtin ``id()`` erroneously used in MongoDBJobStore's ``JobLookupError()`` - Fixed endless loop with CronTrigger that may occur when the computer's clock resolution is too low (thanks to Jinping Bai for the patch) **3.1.0** - Added RethinkDB job store (contributed by Allen Sanabria) - Added method chaining to the ``modify_job()``, ``reschedule_job()``, ``pause_job()`` and ``resume_job()`` methods in ``BaseScheduler`` and the corresponding methods in the ``Job`` class - Added the EVENT_JOB_SUBMITTED event that indicates a job has been submitted to its executor. 
- Added the EVENT_JOB_MAX_INSTANCES event that indicates a job's execution was skipped due to its maximum number of concurrently running instances being reached - Added the time zone to the repr() output of ``CronTrigger`` and ``IntervalTrigger`` - Fixed rare race condition on scheduler ``shutdown()`` - Dropped official support for CPython 2.6 and 3.2 and PyPy3 - Moved the connection logic in database backed job stores to the ``start()`` method - Migrated to setuptools_scm for versioning - Deprecated the various version related variables in the ``apscheduler`` module (``apscheduler.version_info``, ``apscheduler.version``, ``apscheduler.release``, ``apscheduler.__version__``) **3.0.6** - Fixed bug in the cron trigger that produced off-by-1-hour datetimes when crossing the daylight saving threshold (thanks to Tim Strazny for reporting) **3.0.5** - Fixed cron trigger always coalescing missed run times into a single run time (contributed by Chao Liu) - Fixed infinite loop in the cron trigger when an out-of-bounds value was given in an expression - Fixed debug logging displaying the next wakeup time in the UTC timezone instead of the scheduler's configured timezone - Allowed unicode function references in Python 2 **3.0.4** - Fixed memory leak in the base executor class (contributed by Stefan Nordhausen) **3.0.3** - Fixed compatibility with pymongo 3.0 **3.0.2** - Fixed ValueError when the target callable has a default keyword argument that wasn't overridden - Fixed wrong job sort order in some job stores - Fixed exception when loading all jobs from the redis job store when there are paused jobs in it - Fixed AttributeError when printing a job list when there were pending jobs - Added setuptools as an explicit requirement in install requirements **3.0.1** - A wider variety of target callables can now be scheduled so that the jobs are still serializable (static methods on Python 3.3+, unbound methods on all except Python 3.2) - Attempting to serialize a non-serializable Job now raises a helpful exception during serialization. Thanks to Jeremy Morgan for pointing this out. 
- Fixed table creation with SQLAlchemyJobStore on MySQL/InnoDB - Fixed start date getting set too far in the future with a timezone different from the local one - Fixed _run_job_error() being called with the incorrect number of arguments in most executors **3.0.0** - Added support for timezones (special thanks to Curtis Vogt for help with this one) - Split the old Scheduler class into BlockingScheduler and BackgroundScheduler and added integration for asyncio (PEP 3156), Gevent, Tornado, Twisted and Qt event loops - Overhauled the job store system for much better scalability - Added the ability to modify, reschedule, pause and resume jobs - Dropped the Shelve job store because it could not work with the new job store system - Dropped the max_runs option and run counting of jobs since it could not be implemented reliably - Adding jobs is now done exclusively through ``add_job()`` -- the shortcuts to triggers were removed - Added the ``end_date`` parameter to cron and interval triggers - It is now possible to add a job directly to an executor without scheduling, by omitting the trigger argument - Replaced the thread pool with a pluggable executor system - Added support for running jobs in subprocesses (via the ``processpool`` executor) - Switched from nose to py.test for running unit tests **2.1.0** - Added Redis job store - Added a "standalone" mode that runs the scheduler in the calling thread - Fixed disk synchronization in ShelveJobStore - Switched to PyPy 1.9 for PyPy compatibility testing - Dropped Python 2.4 support - Fixed SQLAlchemy 0.8 compatibility in SQLAlchemyJobStore - Various documentation improvements **2.0.3** - The scheduler now closes the job store that is being removed, and all job stores on shutdown() by default - Added the ``last`` expression in the day field of CronTrigger (thanks rcaselli) - Raise a TypeError when fields with invalid names are passed to CronTrigger (thanks Christy O'Reilly) - Fixed the persistent.py example by shutting down the scheduler on Ctrl+C - Added PyPy 1.8 and CPython 3.3 to the test suite - Dropped PyPy 1.4 - 1.5 and CPython 3.1 from the test suite - Updated setup.cfg for compatibility with distutils2/packaging - Examples, documentation sources and unit tests are now packaged in the source distribution **2.0.2** - Removed the unique constraint from the "name" column in the SQLAlchemy job store - Fixed output from Scheduler.print_jobs() which did not previously output a line ending at the end **2.0.1** - Fixed cron style jobs getting wrong default values **2.0.0** - Added configurable job stores with several persistent back-ends (shelve, SQLAlchemy and MongoDB) - Added the possibility to listen for job events (execution, error, misfire, finish) on a scheduler - Added an optional start time for cron-style jobs - Added optional job execution coalescing for situations where several executions of the job are due - Added an option to limit the maximum number of concurrently executing instances of the job - Allowed configuration of misfire grace times on a per-job basis - Allowed jobs to be explicitly named - All triggers now accept dates in string form (YYYY-mm-dd HH:MM:SS) - Jobs are now run in a thread pool; you can either supply your own PEP 3148 compliant thread pool or let APScheduler create its own - Maximum run count can be configured for all jobs, not just those using interval-based scheduling - Fixed a v1.x design flaw that caused jobs to be executed twice when the scheduler thread was woken up while still within the allowable range of 
their previous execution time (issues #5, #7) - Changed defaults for cron-style jobs to be more intuitive -- it will now default to all minimum values for fields lower than the least significant explicitly defined field **1.3.1** - Fixed time difference calculation to take into account shifts to and from daylight saving time **1.3.0** - Added __repr__() implementations to expressions, fields, triggers, and jobs to help with debugging - Added the dump_jobs method on Scheduler, which gives a helpful listing of all jobs scheduled on it - Fixed positional weekday (3th fri etc.) expressions not working except in some edge cases (fixes #2) - Removed autogenerated API documentation for modules which are not part of the public API, as it might confuse some users .. Note:: Positional weekdays are now used with the **day** field, not **weekday**. **1.2.1** - Fixed regression: add_cron_job() in Scheduler was creating a CronTrigger with the wrong parameters (fixes #1, #3) - Fixed: if the scheduler is restarted, clear the "stopped" flag to allow jobs to be scheduled again **1.2.0** - Added the ``week`` option for cron schedules - Added the ``daemonic`` configuration option - Fixed a bug in cron expression lists that could cause valid firing times to be missed - Fixed unscheduling bound methods via unschedule_func() - Changed CronTrigger constructor argument names to match those in Scheduler **1.01** - Fixed a corner case where the combination of hour and day_of_week parameters would cause incorrect timing for a cron trigger