Job Types
This guide outlines the different job types supported by a Chainlink node.
Solidity cron jobs
Executes a job on a schedule. Does not rely on any kind of external trigger.
Spec format
```toml
type = "cron"
schemaVersion = 1
evmChainID = 1
schedule = "CRON_TZ=UTC * */20 * * * *"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F01"
observationSource = """
fetch    [type="http" method=GET url="https://chain.link/ETH-USD"]
parse    [type="jsonparse" path="data,price"]
multiply [type="multiply" times=100]

fetch -> parse -> multiply
"""
```
Shared fields
See shared fields.
Unique fields
`schedule`: the frequency with which the job is to be run. There are two ways to specify this:

- Traditional UNIX cron format, but with 6 fields, not 5. The extra field allows for "seconds" granularity. Note: you must specify the `CRON_TZ=...` parameter if you use this format.
- `@` shorthand, e.g. `@every 1h`. This shorthand does not take account of the node's timezone; rather, it simply begins counting down the moment that the job is added to the node (or the node is rebooted). As such, no `CRON_TZ` parameter is needed.
For all supported schedules, please refer to the cron library documentation.
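For example, the schedule above could instead be written with the shorthand form (the 20-minute interval is chosen purely for illustration):

```toml
# Counts down from the moment the job is added to the node; no CRON_TZ needed.
schedule = "@every 20m"
```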
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: the ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: the globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: the local name of the job.
- `$(jobRun.meta)`: a map of metadata that can be sent to a bridge, etc.
Direct request jobs
Executes a job upon receipt of an explicit request made by a user. The request is detected via a log emitted by an Oracle or Operator contract. This is similar to the legacy ethlog/runlog style of jobs.
Spec format
```toml
type = "directrequest"
schemaVersion = 1
evmChainID = 1
name = "example eth request event spec"
contractAddress = "0x613a38AC1659769640aaE063C651F48E0250454C"

# Optional fields:
# requesters = [
#   "0xaaaa1F8ee20f5565510B84f9353F1E333E753B7a",
#   "0xbbbb70F0e81C6F3430dfdC9fa02fB22BdD818C4e"
# ]
# minContractPaymentLinkJuels = "100000000000000"
# externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F02"
# minIncomingConfirmations = 10

observationSource = """
ds          [type="http" method=GET url="http://example.com"]
ds_parse    [type="jsonparse" path="USD"]
ds_multiply [type="multiply" times=100]

ds -> ds_parse -> ds_multiply
"""
```
Shared fields
See shared fields.
Unique fields
- `contractAddress`: The Oracle or Operator contract to monitor for requests
- `requesters`: Optional - Allows whitelisting requesters
- `minContractPaymentLinkJuels`: Optional - Allows you to specify a job-specific minimum contract payment
- `minIncomingConfirmations`: Optional - Allows you to specify a job-specific `MIN_INCOMING_CONFIRMATIONS` value; it must be greater than the global `MIN_INCOMING_CONFIRMATIONS`
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: the ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: the globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: the local name of the job.
- `$(jobRun.meta)`: a map of metadata that can be sent to a bridge, etc.
- `$(jobRun.logBlockHash)`: the hash of the block in which the initiating log was received.
- `$(jobRun.logBlockNumber)`: the number of the block in which the initiating log was received.
- `$(jobRun.logTxHash)`: the transaction hash that generated the initiating log.
- `$(jobRun.logAddress)`: the address of the contract to which the initiating transaction was sent.
- `$(jobRun.logTopics)`: the log's topics (`indexed` fields).
- `$(jobRun.logData)`: the log's data (non-`indexed` fields).
- `$(jobRun.blockReceiptsRoot)`: the root of the receipts trie of the block (hash).
- `$(jobRun.blockTransactionsRoot)`: the root of the transaction trie of the block (hash).
- `$(jobRun.blockStateRoot)`: the root of the final state trie of the block (hash).
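As a sketch of how these variables are used, a pipeline task can interpolate them into its parameters. Here the bridge name `my_bridge` and the payload shape are placeholders chosen for illustration:

```toml
observationSource = """
notify [type="bridge" name="my_bridge"
        requestData="{ \\"txHash\\": \\"$(jobRun.logTxHash)\\", \\"blockNumber\\": $(jobRun.logBlockNumber) }"]
"""
```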
Examples
Get > Uint256 job
Let's assume that a user makes a request to an oracle to call a public API, retrieve a number from the response, remove any decimals and return uint256.
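A hedged sketch of what the request-handling part of such a spec might look like (the contract address is a placeholder, and the fulfillment tasks that encode and submit the on-chain response are omitted for brevity; exact task parameters depend on your Operator contract and node version):

```toml
type = "directrequest"
schemaVersion = 1
name = "Get > Uint256"
contractAddress = "0x0000000000000000000000000000000000000000" # placeholder Operator address
observationSource = """
decode_log  [type="ethabidecodelog"
             abi="OracleRequest(bytes32 indexed specId, address requester, bytes32 requestId, uint256 payment, address callbackAddr, bytes4 callbackFunctionId, uint256 cancelExpiration, uint256 dataVersion, bytes data)"
             data="$(jobRun.logData)" topics="$(jobRun.logTopics)"]
decode_cbor [type="cborparse" data="$(decode_log.data)"]
fetch       [type="http" method=GET url="$(decode_cbor.get)"]
parse       [type="jsonparse" path="$(decode_cbor.path)" data="$(fetch)"]
multiply    [type="multiply" input="$(parse)" times="$(decode_cbor.times)"]

decode_log -> decode_cbor -> fetch -> parse -> multiply
"""
```

The `multiply` task is what removes the decimals: the requester supplies a `times` multiplier in the CBOR payload, so the floating-point API value arrives on-chain as an integer.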
Get > Int256 job
Let's assume that a user makes a request to an oracle to call a public API, retrieve a number from the response, remove any decimals and return int256.
- The job spec example can be found here.
Get > Bool job
Let's assume that a user makes a request to an oracle to call a public API, retrieve a boolean from the response and return bool.
- The job spec example can be found here.
Get > String job
Let's assume that a user makes a request to an oracle and would like to fetch a string from the response.
Get > Bytes job
Let's assume that a user makes a request to an oracle and would like to fetch bytes from the response (meaning a response that contains an arbitrary-length raw byte data).
Multi-Word job
Let's assume that a user makes a request to an oracle and would like to fetch multiple words in one single request.
Existing job
Using an existing Oracle Job makes your smart contract code more succinct. Let's assume that a user makes a request to an oracle that leverages Etherscan External Adapter to retrieve the gas price.
Flux Monitor Jobs
The Flux Monitor job type is for continually-updating data feeds that aggregate responses from multiple oracles. The oracles servicing the feed submit rounds based on several triggers:
- An occasional poll, which must show that there has been sufficient deviation from an off-chain data source before a new result is submitted
- New rounds initiated by other oracles on the feeds. If another oracle notices sufficient deviation, all other oracles will submit their current observations as well.
- A heartbeat, which ensures that even if no deviation occurs, we submit a new result to prove liveness. This can take one of two forms:
  - The "idle timer", which begins counting down each time a round is started
  - The "drumbeat", which simply ticks at a steady interval, much like a `cron` job
Spec format
```toml
type = "fluxmonitor"
schemaVersion = 1
name = "example flux monitor spec"
contractAddress = "0x3cCad4715152693fE3BC4460591e3D3Fbd071b42"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F03"

threshold = 0.5
absoluteThreshold = 0.0 # optional

idleTimerPeriod = "1s"
idleTimerDisabled = false

pollTimerPeriod = "1m"
pollTimerDisabled = false

drumbeatEnabled = true
drumbeatSchedule = "CRON_TZ=UTC * */20 * * * *"

observationSource = """
// data source 1
ds1 [type="http" method=GET url="https://pricesource1.com"
     requestData="{\\"coin\\": \\"ETH\\", \\"market\\": \\"USD\\"}"]
ds1_parse [type="jsonparse" path="data,result"]

// data source 2
ds2 [type="http" method=GET url="https://pricesource2.com"
     requestData="{\\"coin\\": \\"ETH\\", \\"market\\": \\"USD\\"}"]
ds2_parse [type="jsonparse" path="data,result"]

ds1 -> ds1_parse -> medianized_answer
ds2 -> ds2_parse -> medianized_answer

medianized_answer [type=median]
"""
```
Shared fields
See shared fields.
Unique fields
- `contractAddress`: the address of the FluxAggregator contract that manages the feed.
- `threshold`: the percentage threshold of deviation from the previous on-chain answer that must be observed before a new set of observations is submitted to the contract.
- `absoluteThreshold`: the absolute numerical deviation that must be observed from the previous on-chain answer before a new set of observations is submitted to the contract. This is primarily useful with data that can legitimately sometimes hit 0, as it's impossible to calculate a percentage deviation from 0.
- `idleTimerPeriod`: the amount of time (after the start of the last round) after which a new round will be automatically initiated, regardless of any observed off-chain deviation.
- `idleTimerDisabled`: whether the idle timer is used to trigger new rounds.
- `drumbeatEnabled`: whether the drumbeat is used to trigger new rounds.
- `drumbeatSchedule`: the cron schedule of the drumbeat. This field supports the same syntax as the cron job type (see the cron library documentation for details). `CRON_TZ` is required.
- `pollTimerPeriod`: the frequency with which the off-chain data source is checked for deviation against the previously submitted on-chain answer.
- `pollTimerDisabled`: whether the occasional deviation check is used to trigger new rounds.

Notes:

- For duration parameters, the maximum unit of time is `h` (hour). Durations of a day or longer must be expressed in hours.
- If no time unit is provided, the default unit is nanoseconds, which is almost never what you want.
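To make the duration rules concrete, here is an illustrative sketch using a field from the spec above (the values are examples only):

```toml
idleTimerPeriod = "24h"     # OK: one day, expressed in hours
# idleTimerPeriod = "1d"    # invalid: "d" is not a supported unit
# idleTimerPeriod = "3600"  # no unit: parses as 3600 nanoseconds, almost certainly not intended
```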
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: the ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: the globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: the local name of the job.
- `$(jobRun.meta)`: a map of metadata that can be sent to a bridge, etc.
Keeper jobs
Keeper jobs occasionally poll a smart contract method that expresses whether something in the contract is ready for some on-chain action to be performed. When it's ready, the job executes that on-chain action.
Examples:
- Liquidations
- Rebalancing portfolios
- Rebase token supply adjustments
- Auto-compounding
- Limit orders
Spec format
```toml
type = "keeper"
schemaVersion = 1
evmChainID = 1
name = "example keeper spec"
contractAddress = "0x7b3EC232b08BD7b4b3305BE0C044D907B2DF960B"
fromAddress = "0xa8037A20989AFcBC51798de9762b351D63ff462e"
```
Shared fields
See shared fields.
Unique fields
- `evmChainID`: The numeric chain ID of the chain on which the Chainlink Automation Registry is deployed
- `contractAddress`: The address of the Chainlink Automation Registry contract to poll and update
- `fromAddress`: The Oracle node address from which to send updates
- `externalJobID`: This is an optional field. When omitted, it will be generated
Off-chain reporting jobs
OCR jobs (off-chain reporting jobs) are used very similarly to Flux Monitor jobs. They update data feeds with aggregated data from many Chainlink oracle nodes. However, they do this aggregation using a cryptographically-secure off-chain protocol that makes it possible for only a single node to submit all answers from all participating nodes during each round (with proofs that the other nodes' answers were legitimately provided by those nodes), which saves a significant amount of gas.
Off-chain reporting jobs require the `FEATURE_OFFCHAIN_REPORTING=true` environment variable.
Bootstrap node
Every OCR cluster requires at least one bootstrap node as a kind of "rallying point" that enables the other nodes to find one another. Bootstrap nodes do not participate in the aggregation protocol and do not submit answers to the feed.
Spec format
```toml
type = "offchainreporting"
schemaVersion = 1
evmChainID = 1
contractAddress = "0x27548a32b9aD5D64c5945EaE9Da5337bc3169D15"
p2pBootstrapPeers = [
  "/dns4/chain.link/tcp/1234/p2p/16Uiu2HAm58SP7UL8zsnpeuwHfytLocaqgnyaYKP8wu7qRdrixLju",
]
isBootstrapPeer = true
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F05"
```
Shared fields
See shared fields.
Unique fields
- `contractAddress`: The address of the `OffchainReportingAggregator` contract.
- `evmChainID`: The chain ID of the EVM chain in which the job will operate.
- `p2pBootstrapPeers`: A list of libp2p dial addresses of the other bootstrap nodes helping oracle nodes find one another on the network. It is used with P2P networking stack V1 as follows:
  `p2pBootstrapPeers = [ "/dns4/<host name or ip>/tcp/<port>/p2p/<bootstrap node's P2P ID>" ]`
- `p2pv2Bootstrappers`: A list of libp2p dial addresses of the other bootstrap nodes helping oracle nodes find one another on the network. It is used with P2P networking stack V2 as follows:
  `p2pv2Bootstrappers = [ "<bootstrap node's P2P ID>@<host name or ip>:<port>" ]`
- `isBootstrapPeer`: This must be set to `true`.
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: The ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: The globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: The local name of the job.
- `$(jobRun.meta)`: A map of metadata that can be sent to a bridge, etc.
Oracle node
Oracle nodes, on the other hand, are responsible for submitting answers.
Spec format
```toml
type = "offchainreporting"
schemaVersion = 1
evmChainID = 1
name = "OCR: ETH/USD"
contractAddress = "0x613a38AC1659769640aaE063C651F48E0250454C"
externalJobID = "0EEC7E1D-D0D2-476C-A1A8-72DFB6633F06"
p2pPeerID = "12D3KooWApUJaQB2saFjyEUfq6BmysnsSnhLnY5CF9tURYVKgoXK"
p2pBootstrapPeers = [
  "/dns4/chain.link/tcp/1234/p2p/16Uiu2HAm58SP7UL8zsnpeuwHfytLocaqgnyaYKP8wu7qRdrixLju",
]
isBootstrapPeer = false
keyBundleID = "7f993fb701b3410b1f6e8d4d93a7462754d24609b9b31a4fe64a0cb475a4d934"
monitoringEndpoint = "chain.link:4321"
transmitterAddress = "0xF67D0290337bca0847005C7ffD1BC75BA9AAE6e4"
observationTimeout = "10s"
blockchainTimeout = "20s"
contractConfigTrackerSubscribeInterval = "2m"
contractConfigTrackerPollInterval = "1m"
contractConfigConfirmations = 3
observationSource = """
// data source 1
ds1 [type="bridge" name=eth_usd]
ds1_parse [type="jsonparse" path="one,two"]
ds1_multiply [type="multiply" times=100]

// data source 2
ds2 [type="http" method=GET url="https://chain.link/eth_usd"
     requestData="{\\"hi\\": \\"hello\\"}"]
ds2_parse [type="jsonparse" path="three,four"]
ds2_multiply [type="multiply" times=100]

ds1 -> ds1_parse -> ds1_multiply -> answer
ds2 -> ds2_parse -> ds2_multiply -> answer

answer [type=median]
"""
```
Shared fields
See shared fields.
Unique fields
- `contractAddress`: The address of the `OffchainReportingAggregator` contract.
- `evmChainID`: The chain ID of the EVM chain in which the job will operate.
- `p2pPeerID`: The base58-encoded libp2p public key of this node.
- `p2pBootstrapPeers`: A list of libp2p dial addresses of the bootstrap nodes helping oracle nodes find one another on the network. It is used with P2P networking stack V1 as follows:
  `p2pBootstrapPeers = [ "/dns4/<host name or ip>/tcp/<port>/p2p/<bootstrap node's P2P ID>" ]`
- `p2pv2Bootstrappers`: A list of libp2p dial addresses of the bootstrap nodes helping oracle nodes find one another on the network. It is used with P2P networking stack V2 as follows:
  `p2pv2Bootstrappers = [ "<bootstrap node's P2P ID>@<host name or ip>:<port>" ]`
- `keyBundleID`: The hash of the OCR key bundle to be used by this node. The Chainlink node keystore manages these key bundles. Use the node Key Management UI or the `chainlink keys ocr` sub-commands in the CLI to create and manage key bundles.
- `monitoringEndpoint`: The URL of the telemetry endpoint to send OCR metrics to.
- `transmitterAddress`: The Ethereum address from which to send aggregated submissions to the OCR contract.
- `observationTimeout`: The maximum duration to wait before an off-chain request for data is considered failed/unfulfillable.
- `blockchainTimeout`: The maximum duration to wait before an on-chain request for data is considered failed/unfulfillable.
- `contractConfigTrackerSubscribeInterval`: The interval at which to retry subscribing to on-chain config changes if a subscription has not yet successfully been made.
- `contractConfigTrackerPollInterval`: The interval at which to proactively poll the on-chain config for changes.
- `contractConfigConfirmations`: The number of blocks to wait after an on-chain config change before considering it worthy of acting upon.
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: The ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: The globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: The local name of the job.
- `$(jobRun.meta)`: A map of metadata that can be sent to a bridge, etc.
Webhook Jobs
Webhook jobs can be initiated by HTTP request, either by a user or external initiator.
:::note
You'll need `FEATURE_WEBHOOK_V2=true` in your `.env` file.
:::
This is an example webhook job:
```toml
type = "webhook"
schemaVersion = 1
externalInitiators = [
  { name = "my-external-initiator-1", spec = "{\"foo\": 42}" },
  { name = "my-external-initiator-2", spec = "{}" }
]
observationSource = """
parse_request  [type="jsonparse" path="data,result" data="$(jobRun.requestBody)"]
multiply       [type="multiply" input="$(parse_request)" times="100"]
send_to_bridge [type="bridge" name="my_bridge" requestData="{ \\"result\\": $(multiply) }"]

parse_request -> multiply -> send_to_bridge
"""
```
All webhook jobs can have runs triggered by a logged-in user.

Webhook jobs may additionally specify zero or more external initiators, which can also trigger runs for this job. The name must exactly match the name of the referred external initiator. Each external initiator definition must contain a `spec`, which defines the JSON payload that will be sent to the external initiator on job creation if the external initiator has a URL. If you don't care about the spec, you can simply use the empty JSON object.
Unique fields
- `externalInitiators`: an array of `{name, spec}` objects, where `name` is the name registered with the node, and `spec` is the job spec to be forwarded to the external initiator when it is created.
Shared fields
See shared fields.
Job type specific pipeline variables
- `$(jobSpec.databaseID)`: the ID of the job spec in the local database. You shouldn't need this in 99% of cases.
- `$(jobSpec.externalJobID)`: the globally-unique job ID for this job. Used to coordinate between node operators in certain cases.
- `$(jobSpec.name)`: the local name of the job.
- `$(jobRun.meta)`: a map of metadata that can be sent to a bridge, etc.
- `$(jobRun.requestBody)`: the body of the request that initiated the job run.