The documentation content home for https://docs.quantum.ibm.com (excluding API reference).
Maintaining up-to-date documentation is a huge challenge for any software project, especially in a field like quantum computing, where advances in research and technological capabilities happen incredibly fast. As a result, we greatly appreciate anyone who takes the time to help us keep this content accurate and of the highest possible quality, for the benefit of the broadest range of users.
Read on for more information about how to support this project:
This is the quickest, easiest, and most helpful way to contribute to this project and improve the quality of Qiskit® and IBM Quantum™ documentation. There are a few different ways to report issues, depending on where the problem was found:
- For problems you've found in the Qiskit SDK API Reference section, open an issue in the Qiskit repo here.
- For problems you've found in the Qiskit Runtime client section, open an issue in the Qiskit IBM Runtime repo here.
- For problems you've found in any other section of docs, open a content bug issue here.
If you think there are gaps in our documentation, or sections that could be expanded upon, we invite you to open a new content request issue here.
Not every new content suggestion is a good fit for docs, nor are we able to prioritize every request immediately. However, we will do our best to respond to content requests in a timely manner, and we greatly appreciate our community's efforts in generating new ideas.
If you are interested in writing the new content yourself, or already have some draft work you think could be integrated, please also mention that in the issue description. If your content suggestion is accepted, we will let you know and work with you to get the content written and reviewed.
Please note: we DO NOT accept unsolicited PRs for new pages or large updates to existing content. The content we include in docs is carefully planned and curated by our content team, and it must go through the appropriate review process to ensure the highest possible quality before deploying to production. As a result, we are very selective about which content suggestions are approved, and PRs submitted without an associated approved content request are unlikely to be accepted.
You can help the team prioritize already-open issues by doing the following:
- For bug reports, leave a comment in the issue if you have also been experiencing the same problem and can reproduce it (include as much information as you can, e.g., browser type, Qiskit version, etc.).
- For new content requests, leave a comment or upvote (👍) in the issue if you also would like to see that new content added.
You can look through the open issues we have in this repo and address them with a PR. We recommend focusing on issues with the "good first issue" label.
Before getting started on an issue, remember to do the following:
- Read the Code of Conduct
- Check for open, unassigned issues with the "good first issue" label
- Select an issue that is not already assigned to someone and leave a comment to request to be assigned
Once you have an issue to work on, see the "How to work with this repo" section below to get going, then open a PR.
Before opening a PR, remember to do the following:
- Check that you have addressed all the requirements from the original issue
- Run the quality control checks with `npm run check`
- Use the GitHub "fixes" notation to link your PR to the issue you are addressing
These tools will also run in CI, but it can be convenient when iterating to run the tools locally.
First, install the below software:
- Node.js. If you expect to use JavaScript in other projects, consider using NVM. Otherwise, consider using Homebrew or installing Node.js directly.
- Docker. You must also ensure that it is running.
- If you cannot use Docker from docker.com, consider using Colima or Rancher Desktop. When installing Rancher Desktop, choose Moby/Dockerd as the engine, rather than nerdctl. To ensure it's running, open up the app "Rancher Desktop".
Then, install the dependencies with:
npm install
You can preview the docs locally by following these two steps:
- Ensure Docker is running. For example, open Rancher Desktop.
- Run `./start` in your terminal, and open http://localhost:3000 in your browser.
  - On Windows, run `python start` instead. Alternatively, use Windows Subsystem for Linux and run `./start`.
The preview application does not include the top nav bar. Instead, navigate to the folder you want with the links in the home page. You can return to the home page at any time by clicking "IBM Quantum Documentation Preview" in the top-left of the header.
Maintainers: when you release a new version of the image, you need to update the image digest in `./start` by following the instructions at the top of the file and opening a pull request.
API docs authors can preview their changes to one of the APIs by using the `-a` parameter to specify the path to the docs folder:

- Run `npm run gen-api -- -p <pkg-name> -v <version> -a <path/to/docs/_build/html>`.
- Execute `./start` and open up http://localhost:3000, as explained in the prior section.
When adding a new notebook, you'll need to tell the testing tools how to handle it.
To do this, add the file path to `scripts/config/notebook-testing.toml`. There are four categories (an illustrative sketch of the file follows this list):

- `notebooks_normal_test`: Notebooks to be run normally in CI. These notebooks can't submit jobs as the queue times are too long and it will waste resources. You can interact with IBM Quantum to retrieve jobs and backend information.
- `notebooks_that_submit_jobs`: Notebooks that submit jobs, but that are small enough to run on a 5-qubit simulator. We will test these notebooks in CI by patching `least_busy` to return a 5-qubit fake backend.
- `notebooks_no_mock`: For notebooks that can't be tested using the 5-qubit simulator patch. We skip testing these in CI and instead run them twice per month. Any notebooks with cells that take more than five minutes to run are also deemed too big for CI. Try to avoid adding notebooks to this category if possible.
- `notebooks_exclude`: Notebooks to be ignored.
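For illustration, entries in that file might look roughly like the following (the notebook paths are made up; follow the format already used in `scripts/config/notebook-testing.toml`):

```toml
# Hypothetical entries -- use the existing file as the source of truth for the format
notebooks_normal_test = [
    "docs/guides/example-notebook.ipynb",
]
notebooks_that_submit_jobs = [
    "docs/guides/example-job-submitting-notebook.ipynb",
]
```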
If your notebook uses the LaTeX circuit drawer (`qc.draw("latex")`), you must also add it to the "Check for notebooks that require LaTeX" step in `.github/workflows/notebook-test.yml`.

If you forget to classify the notebook in `scripts/config/notebook-testing.toml`, you will get the error "FAILED scripts/nb-tester/test/test_notebook_classification.py::test_all_notebooks_are_classified".
Add a new markdown cell under your title with a `version-info` tag. When you execute the notebook (see the next section), the script will populate this cell with the package versions so users can reproduce the results.
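For illustration, the raw JSON of that markdown cell might look roughly like this (the cell starts empty; the execution script fills in the version information):

```json
{
  "cell_type": "markdown",
  "metadata": {
    "tags": ["version-info"]
  },
  "source": []
}
```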
Before submitting a new notebook or code changes to a notebook, you must run the notebook using `tox -- --write <path-to-notebook>` and commit the results. If the notebook submits jobs, also use the argument `--submit-jobs`. This means we can be sure all notebooks work and that users will see the same results when they run using the environment we recommend.
To execute notebooks in a fixed Python environment, first install tox
using
pipx:
pipx install tox
You also need to install a few system dependencies: TeX, Poppler, and graphviz.
On macOS, you can run `brew install mactex-no-gui poppler graphviz`. On Ubuntu, you can run `apt-get install texlive-pictures texlive-latex-extra poppler-utils graphviz`.
- To execute all notebooks, run `tox`.
- To only execute specific notebooks, pass them as arguments: `tox -- path/to/notebook.ipynb path/to/another-notebook.ipynb`
- To write the execution results to the file, pass the `--write` argument: `tox -- optional/paths/to/notebooks.ipynb --write`
When you make a pull request changing a notebook that doesn't submit jobs, you can get a version of that notebook that was executed by tox from CI. To do this, click "Show all checks" in the info box at the bottom of the pull request page on GitHub, then choose "Details" for the "Test notebooks" job. From the job page, click "Summary", then download "Executed notebooks". Otherwise, if your notebook does submit jobs, you need to run it locally using the steps mentioned earlier.
We don't want users to see warnings that can be avoided, so it's best to fix the code to avoid them. However, if a warning is unavoidable, you can stop it blocking CI by adding an `ignore-warnings` tag to the cell. In VSCode, right-click the cell, choose "Add cell tag", type `ignore-warnings`, then press "Enter". In Jupyter notebook (depending on version), choose View > Right Sidebar > Show Notebook Tools, then under "Common Tools" add a tag with the text `ignore-warnings`.
Our CI checks notebooks run from start to finish without errors or warnings. You can add extra checks in notebooks to catch other unexpected behavior.
For example, say we claim a cell always returns the string `0011`. It would be embarrassing if this was not true. We can assert this in CI by adding the following code cell, and hide it from users with a `remove-cell` tag.
# Confirm output is what we expect.
assert _ == '0011'
In Jupyter notebooks, the underscore `_` variable stores the value of the previous cell output. You should also add a comment like `# Confirm output is what we expect` so that authors know this block is only for testing. Make sure you add the `remove-cell` tag.
If something ever causes this value to
change, CI will alert us.
We use `squeaky` and `ruff` to lint our notebooks. First install `tox` using pipx.
pipx install tox
To check if a notebook needs linting:
# Check all notebooks in ./docs
tox -e lint
Some problems can be fixed automatically. To fix these problems, run:
# Fix problems in all notebooks
tox -e fix
# Fix problems in a specific notebook
tox -e fix -- path/to/notebook
If you use the Jupyter notebook editor, consider adding squeaky as a pre-save hook. This will lint your notebooks as you save them, so you never need to worry about it.
We have two broken link checkers: for internal links and for external links.
To check internal links:
# Only check non-API docs
npm run check:internal-links
# You can add any of the below arguments to also check API docs.
npm run check:internal-links -- --current-apis --dev-apis --historical-apis --qiskit-legacy-release-notes
# Or, run all the checks. Although this only checks non-API docs.
npm run check
To check external links:
# Specify the files you want after `--`
npm run check:external-links -- docs/guides/index.md docs/guides/circuit-execution.mdx
# You can also use globs
npm run check:external-links -- 'docs/guides/*' '!docs/guides/index.mdx'
Every file should have a home in one of the _toc.json
files.
To check for orphaned pages, run:
# Only check non-API docs
npm run check:orphan-pages
# You can also check API docs
npm run check:orphan-pages -- --apis
# Or, run all the checks. However this will skip the API docs
npm run check
Every file needs to have a `title` and `description`, as explained in Page Metadata. The lint job in CI will fail with instructions for any bad file.
You can also check for valid metadata locally:
# Only check file metadata
npm run check:metadata
# By default, only the non-API docs are checked. You can add the
# below argument to also check API docs.
npm run check:metadata -- --apis
# Or, run all the checks. Although this only checks non-API docs.
npm run check
Every image needs to have alt text for accessibility and must use markdown syntax. To avoid changing the styling of the images, the use of the `<img>` HTML tag is not allowed. The lint job in CI will fail if images do not have alt text defined or if an `<img>` tag is found.
You can check it locally by running:
# Only check images
npm run check:images
# By default, only the non-API docs are checked. You can add the
# below argument to also check API docs.
npm run check:images -- --apis
# Or, run all the checks
npm run check
We use cSpell to check for spelling. The lint
job in CI will fail if there are spelling issues.
There are two ways to check spelling locally, rather than waiting for CI.
# Only check spelling
npm run check:spelling
# Or, run all the checks
npm run check
- Use the VSCode extension Code Spell Checker.
There are two ways to deal with cSpell incorrectly complaining about a word, such as abbreviations.
- Ignore the word in the local markdown file by adding a comment to the file, like below. The word is not case-sensitive, and the comment can be placed anywhere.
{/* cspell:ignore hellllooooo, ayyyyy */}
# Hellllooooo!
Ayyyyy, this is a fake description.
- If the word is a name, add it to the `scripts/config/cspell/dictionaries/people.txt` file. If it is a scientific or quantum-specific word, add it to the `scripts/config/cspell/dictionaries/qiskit.txt` file. If it doesn't fit in either category, add it to the `words` section in `scripts/config/cspell/cSpell.json`. The word is not case-sensitive.
If the word appears in multiple files, prefer the second approach: add it to one of the dictionaries or to `cSpell.json`.
It's possible to write broken pages that crash when loaded. This is usually due to syntax errors.
To check that all the non-API docs render:
- Start up the local preview with `./start` by following the instructions at Preview the docs locally.
- In a new tab, run `npm run check:pages-render -- --non-api`.

You can also check that API docs render by using any of these arguments: `npm run check:pages-render -- --non-api --qiskit-release-notes --current-apis --dev-apis --historical-apis`. Be warned that this is much slower.
CI will check on every PR that any changed files render correctly. We also run a weekly cron job to check that every page renders correctly.
Run `npm run fmt` to automatically format the README, `.github` folder, and `scripts/` folder. You should run this command if you get the error "run Prettier to fix" in CI.
To check that formatting is valid without actually making changes, run `npm run check:fmt` or `npm run check`.
This is useful when we make improvements to the API generation script.
You can regenerate all API docs versions following these steps:
- Create a dedicated branch for the regeneration other than `main` using `git checkout -b <branch-name>`.
- Ensure there are no pending changes by running `git status` and creating a new commit for them if necessary.
- Run `npm run regen-apis` to regenerate all API docs versions for `qiskit`, `qiskit-ibm-runtime`, and `qiskit-ibm-transpiler`.
Each regenerated version will be saved as a distinct commit. If the changes are too large for one single PR, consider splitting it up into multiple PRs by using `git cherry-pick` or `git rebase -i` so each PR only has the commits it wants to target.
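For example, one hypothetical way to carve a single package's commits into their own PR (the branch name and SHA below are placeholders):

```sh
# Create a branch from main containing only the commits you want in this PR
git checkout -b regen-qiskit-only main
git cherry-pick <sha-of-qiskit-regen-commit>
```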
If you only want to regenerate the latest stable minor release of each package, add `--current-apis-only` as an argument. If you only want to regenerate versions of one package, use the `-p <pkg-name>` argument.
Alternatively, you can also regenerate one specific version:
- Choose which documentation you want to generate (`qiskit`, `qiskit-ibm-runtime`, or `qiskit-ibm-transpiler`) and its version.
- Run `npm run gen-api -- -p <pkg-name> -v <version>`, e.g. `npm run gen-api -- -p qiskit -v 0.45.0`
If the version is not for the latest stable minor release series, then add `--historical` to the arguments. For example, use `--historical` if the latest stable release is 0.45.* but you're generating docs for the patch release 0.44.3.
Additionally, if you are regenerating a dev version, you can add `--dev` as an argument and the documentation will be built at `/docs/api/<pkg-name>/dev`. For dev versions, end the `--version` in `-dev`, e.g. `-v 1.0.0-dev`. If a release candidate has already been released, use `-v 1.0.0rc1`, for example.
In this case, no commit will be automatically created.
This is useful when new docs content is published, usually corresponding to new releases or hotfixes for content issues. If you're generating a patch release, also see the below subsection for additional steps.
1. Choose which documentation you want to generate (e.g. `qiskit` or `qiskit-ibm-runtime`) and its version.
2. Determine the full version, such as by looking at https://github.com/Qiskit/qiskit/releases
3. Download a CI artifact with the project's documentation. To find this:
   - Pull up the CI runs for the stable commit that you want to build docs from. This should not be from a pull request.
   - Open up the "Details" for the relevant workflow.
     - Qiskit: "Documentation / Build (push)"
     - Runtime: "CI / Build documentation (push)"
   - Click the "Summary" page at the top of the left navbar.
   - Scroll down to "Artifacts" and look for the artifact related to documentation, such as `html_docs`.
   - Download the artifact by clicking on its name.
4. Rename the downloaded zip file with its version number, e.g. `0.45.zip` for an artifact from `qiskit v0.45.2`.
5. Upload the renamed zip file to https://ibm.ent.box.com/folder/246867452622
6. Share the file by clicking the `Copy shared link` button.
7. Select `People with the link` and go to `Link Settings`.
8. Under `Link Expiration`, select `Disable Shared Link on` and set an expiration date of ~10 years into the future.
9. Copy the direct link at the end of the `Shared Link Settings` tab.
10. Modify the `scripts/config/api-html-artifacts.json` file, adding the new version with the direct link from step 9 (a hypothetical example entry is shown after this list).
11. Run `npm run gen-api -- -p <pkg-name> -v <version>`, e.g. `npm run gen-api -- -p qiskit -v 0.45.0`. If it is not the latest minor version, set `--historical`.
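As a purely illustrative sketch of step 10 (the package, version, and URL below are made up; follow the structure already present in `scripts/config/api-html-artifacts.json`):

```json
{
  "qiskit": {
    "0.45": "https://ibm.box.com/shared/static/example-direct-link.zip"
  }
}
```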
For dev docs, add `--dev` and either use a version like `-v 1.0.0-dev` or `-v 1.0.0rc1`.
For example, the latest unversioned docs were `0.2.0` but `0.3.0` was just released. You must first save the latest unversioned docs as historical docs by running `npm run gen-api` with the `--historical` arg. For example, first run `npm run gen-api -- -p qiskit -v 0.2.0 --historical`.

Once the historical docs are set up, you can now generate the newest docs by following the normal process, such as `npm run gen-api -- -p qiskit -v 0.3.0`.
For example, if the current docs are for 0.45.2 but you want to generate 0.45.3.
When uploading the artifact to Box, overwrite the existing file with the new one. No need to update the file metadata.
If the version is not for the latest stable minor release series, remember to add `--historical` to the arguments. For example, use `--historical` if the latest stable release is 0.3.* but you're generating docs for the patch release 0.2.1.
Since `objects.inv` is compressed, we can't review changes through `git diff`. Git does tell you if the file has changed, but this isn't that helpful as the compressed file can be different even if the uncompressed contents are the same.

If you want to see the diff for the uncompressed contents, first install `sphobjinv`.
pipx install sphobjinv
Then add the following to your `.gitconfig` (usually found at `~/.gitconfig`).
[diff "objects_inv"]
textconv = sh -c 'sphobjinv convert plain "$0" -'
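With that in place, `git diff` will show the uncompressed inventory contents. You can also inspect a single inventory directly; for example (the path below is illustrative):

```sh
# Print the uncompressed inventory to stdout
sphobjinv convert plain path/to/objects.inv -
```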
When a new version of an API is released, we should also update nb-tester/requirements.txt
to ensure that our notebooks still work with the latest version of the API. You can do this upgrade either manually or wait for Dependabot's automated PR.
Dependabot will fail to run at first due to not having access to the token. To fix this, have someone with write access trigger CI for the PR, such as by merging main or closing then reopening the issue.
You can land the API generation separately from the requirements.txt
version upgrade. It's high priority to get out new versions of the API docs ASAP, so you should not block that on the notebook version upgrade if you run into any complications like failing notebooks.
See the section "Syncing content with open source repo" in the internal docs repo's README.
Refer to our style guide for technical writing guidance.
We use MDX, which is like normal markdown but adds extensions for custom components we have.
Refer to the Common Markdown syntax for a primer on Markdown. The below guide focuses on the other features you can use when writing docs.
Choose which existing folder from `docs/` your new page belongs to (probably `guides`).

Next, choose the file name. The file name will determine the URL. For example, `start/my-new-page.mdx` results in the URL `start/my-new-page`. Choose a file name that will be stable over the page's lifespan and that is unlikely to clash with other topics. Use `-` rather than `_` as the delimiter. You can also ask for help choosing a name in the GitHub issue or pull request.
If your file will have non-trivial code in it, please create a Jupyter notebook ending in .ipynb
, rather than an MDX file. We prefer Jupyter notebooks when there is code because we have tests to make sure that the code still executes properly, whereas MDX is not tested.
Add the file to these places:
- The folder's `_toc.json`, such as `guides/_toc.json`. The `title` will show up in the left side bar. Note that the `url` leaves off the file extension. If you want a "New" pill to appear next to the page in the side bar, add `"isNew": true` to that page's entry (a hypothetical example entry is shown after this list).
- The appropriate "index" page in the Development workflow section, such as `guides/map-problem-to-circuits`, AND the Tools section in the `_toc.json` file. Or, in the rare case that it doesn't belong on any of these pages, list it in `scripts/js/commands/checkPatternsIndex.ts` in the ALLOWLIST_MISSING_FROM_INDEX or the ALLOWLIST_MISSING_FROM_TOC section. For example, `"/guides/qiskit-code-assistant"`.
- `qiskit_bot.yaml`. Everyone listed under the file name is notified any time the file is updated. If someone wants to be listed as an owner but does not want to receive notifications, put their ID in single quotes. For example, `- "@NoNotifications"`.
Every page must set a `title` and `description`:

- The title is used for browser tabs and the top line of search results. It should usually match the title used in the `_toc.json` file.
- The description should describe the page in at least 50 but no more than 160 characters, ideally using some keywords. The description is what shows up as the text in search results. See Qiskit#131 for some tips.
In MDX files, set the metadata at the top of the file like this:
---
title: Representing quantum computers
description: Learn about coupling maps, basis gates, and backend errors for transpiling
---
In Jupyter notebooks, set `title` and `description` in the `metadata` section for the file. In VSCode, you can set up the metadata with these instructions:
- Open up the file with the "Open With..." option (one way to do this is to right-click the file name to find the "Open With..." option) and then "Text Editor".
- Scroll down to the bottom of the file for the top-level key "metadata". Ensure that this is the metadata for the entire file and not for a single code block. You should see in the "metadata" section other entries like "language_info" and "nbconvert_exporter".
- Add new keys in the "metadata" section for "title" and "description".
"metadata": {
"description": "Get started using Qiskit with IBM Quantum hardware in this Hello World example",
"title": "Hello world",
"celltoolbar": "Raw Cell Format",
"kernelspec": { ...
}
Internal URLs referring to other docs pages should start with /
and not include the file extension. For example:
[Qiskit SDK](/api/qiskit)
[Bit ordering in the Qiskit SDK](/guides/bit-ordering)
External URLs should use the entire URL, such as [GitHub](https://github.com)
.
Images are stored in the `public/images` folder. You should use subfolders to organize the files. For example, images for `guides/my-file.mdx` should be stored like `public/images/guides/my-file/img1.png`.
To use the image:
![Alt text for the image](/images/guides/your-file/your_image.png)
To add an inline image:
Inline ![Alt text for the image](/images/guides/your-file/your_image.png) image
To include a caption:
![Alt text for the image](/images/guides/your-file/your_image.png "Image caption")
You can include a version of the image to be used with the dark theme. You only need to create an image with the same name, ending in `@dark`. So for example, if you have a `sampler.png` image, the dark version would be `sampler@dark.png`. This is important for images that have a white background.
Videos are stored in the `public/videos` folder. You should use subfolders to organize the files. For example, videos for `guides/my-file.mdx` should be stored like `public/videos/guides/my-file/video1.mp4`.
To add a video:
<video title="Write a description of the video here as 'alt text' for accessibility." className="max-w-auto h-auto" controls>
<source src="/videos/guides/sessions/demo.mp4" type="video/mp4"></source>
</video>
We use LaTeX to write math, which gets rendered by the library KaTeX.
Inline math expressions should start with `$` and end with `$`, e.g. `$\frac{123}{2}$`.

Multi-line expressions should start with `$$` and end with `$$`:
$$
L = \frac{123}{2} \rho v^2 S C_1s
$$
Tables are supported: https://www.markdownguide.org/extended-syntax/.
Warning: do not use `|` inside LaTeX/math expressions. Markdown will incorrectly interpret `|` as the divider between cells. Instead, use `\vert`.
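For example, a table cell containing math should be written like this (the content is made up for illustration):

```markdown
| Notation         | Meaning              |
|------------------|----------------------|
| $\vert 0\rangle$ | The zero basis state |
```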
Example comment: {/* Comes from https://qiskit.org/documentation/partners/qiskit_ibm_runtime/getting_started.html */}
For content that you don't want to show by default, use a collapsible section. The user will need to expand the section to read its contents. Refer to GitHub's guide on `<details>` and `<summary>`.
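For example, a minimal sketch of a collapsible section:

```markdown
<details>
<summary>Optional: more background</summary>

This content stays hidden until the user expands the section.

</details>
```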
Footnote 1 link[^first].
Footnote 2 link[^second].
Duplicated footnote reference[^second].
[^first]: Footnote **can have markup**
and multiple paragraphs.
[^second]: Second footnote text.
These are components that we expose through MDX. You can use them in both
.mdx
and .ipynb
files. In Jupyter notebooks, use Markdown cells.
To use an `Admonition`, use the following syntax:
<Admonition type="note">This is an example of a note.</Admonition>
Available types are `note`, `tip`, `info`, `caution`, and `danger`.
By default, the title is the `type`, capitalized. You can customize it by setting `title`:
<Admonition type="note" title="Custom title">
This is a __note__ example
</Admonition>
We also have a specialized admonition for Qiskit Code Assistant prompt suggestions. Warning: avoid a trailing comma on the last entry in `prompts`!
<CodeAssistantAdmonition
tagLine="Need help? Try asking Qiskit Code Assistant."
prompts={[
"# Print the version of Qiskit we're using",
"# Return True if the version of Qiskit is 1.0 or greater",
"# Install Qiskit 1.0.2"
]}
/>
To use a DefinitionTooltip
, use the following syntax:
<DefinitionTooltip definition="Definition for the Term">Term</DefinitionTooltip>
For a full list of props, please check here.
Warning: do not use LaTeX/math expressions in the same paragraph as a definition tooltip because it will break the styling. Use a new line to separate out the two into separate paragraphs.
To use a Tabs
component, use the following syntax:
<Tabs>
<TabItem value="pulses" label="Pulses">
This is the text for pulses
</TabItem>
<TabItem value="qasm" label="QASM">
This is the text for QASM
</TabItem>
</Tabs>
By default, the first tab is selected. You can change that by using the defaultValue
prop.
<Tabs defaultValue="qasm">
<TabItem value="pulses" label="Pulses">
This is the text for pulses
</TabItem>
<TabItem value="qasm" label="QASM">
This is the text for QASM
</TabItem>
</Tabs>
There are situations where you want to repeat the same tabs in several parts of the page. In this situation, you can use the `group` prop to synchronize the selected tab across all usages.
<Tabs group="my-group">
<TabItem value="pulses" label="Pulses">
This is the text for pulses
</TabItem>
<TabItem value="qasm" label="QASM">
This is the text for QASM
</TabItem>
</Tabs>
There is a specific use case where you want to show instructions for different operating systems. In this situation, you can replace the `Tabs` component with an `OperatingSystemTabs` component. The default value of the tab will be selected based on the user's operating system.
<OperatingSystemTabs>
<TabItem value="mac" label="macOS">
Open a terminal and write the command
</TabItem>
<TabItem value="linux" label="Linux">
Open a terminal and write the command
</TabItem>
<TabItem value="win" label="Windows">
Go to windows/run and write `cmd`. It will open a command line. Execute this
command
</TabItem>
</OperatingSystemTabs>
This component only works in notebooks. Notebook code cells are always at the top-level of content, but sometimes you'll want to have them nested in other components, such as in tabs or in a list. While you could write your code as a markdown block, it's usually preferable to keep the code as a code block so that it is executed and its code can be later used in the notebook. The CodeCellPlaceholder component allows you to still use a code block, but move it to render somewhere else in the notebook.
To use this component, add a tag starting with `id-` to the code cell you'd like to move, then add a `<CodeCellPlaceholder tag="id-tag" />` component with the same tag somewhere in your markdown. This will move that code cell into the place of the component.
You can then use this component anywhere in your markdown. While you can move code cells anywhere, try to keep them relatively close to their position in the notebook and preserve their order to avoid confusion.
Here's an example of what this might look like in your notebook source.
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": [
"id-example-cell"
]
},
"outputs": [
{
"data": {
"text/plain": [
"Hello, world!"
]
}
}
],
"source": [
"# This is a code cell\n",
"print(\"Hello, world!\")"
]
},
{
"cell_type": "markdown",
"source": [
"This is a notebook markdown cell.",
"\n",
"<Tabs>\n",
"<TabItem value=\"Example\" label=\"Example\">\n",
" This `TabItem` contains a notebook code cell\n",
" <CodeCellPlaceholder tag=\"id-example-cell\" />\n",
"</TabItem>\n",
"</Tabs>"
]
}
All information needs to identify, mark, and attribute IBM and applicable third-party trademarks. We do this the first time an IBM trademark appears on each page. See the Copyright and trademark information page for more details.
Some companies require a special attribution notice. View a list of the companies to include in a special attribution notice at the Special attributions section of the IBM Legal site.
A (non-exhaustive) list of trademarked names found in our docs:
- IBM®
- IBM Cloud®
- IBM Quantum™
Note: Although Qiskit is a registered trademark of IBM, we do not mark it as such.
See the Usage section of the IBM Quantum Experience Guide for guidance on when to use IBM and when to use IBM Quantum.
To create the symbols in markdown:

Use `&reg;` to get ® for registered trademarks. Use `&trade;` to get ™ for nonregistered trademarks.