notion-md-crawler

A library to recursively retrieve and serialize Notion pages and databases with customization for machine learning applications.

🌟 Features

  • πŸ•·οΈ Crawling Pages and Databases: Dig deep into Notion's hierarchical structure with ease.
  • πŸ“ Serialize to Markdown: Seamlessly convert Notion pages to Markdown for easy use in machine learning and other.
  • πŸ› οΈ Custom Serialization: Adapt the serialization process to fit your specific machine learning needs.
  • ⏳ Async Generator: Yields results on a page-by-page basis, so even huge documents can be made memory efficient.

πŸ› οΈ Installation

@notionhq/client must also be installed.

Using npm 📦:

npm install notion-md-crawler @notionhq/client

Using yarn 🧶:

yarn add notion-md-crawler @notionhq/client

Using pnpm 🚀:

pnpm add notion-md-crawler @notionhq/client

🚀 Quick Start

⚠️ Note: Before getting started, create a Notion integration and obtain its token. Details on the available methods can be found in the API section.

Leveraging the power of JavaScript generators, this library is engineered to handle even the most extensive Notion documents with ease. It's designed to yield results page-by-page, allowing for efficient memory usage and real-time processing.

import { Client } from "@notionhq/client";
import { crawler, pageToString } from "notion-md-crawler";

// Initialize the Notion client with your integration token.
const client = new Client({ auth: process.env.NOTION_API_KEY });

const crawl = crawler({ client });

const main = async () => {
  const rootPageId = "****";
  for await (const result of crawl(rootPageId)) {
    if (result.success) {
      const pageText = pageToString(result.page);
      console.log(pageText);
    }
  }
};

main();

🌐 API

crawler

Recursively crawls a Notion page. Use dbCrawler instead if the root is a Notion database.

Note: It tries to continue crawling as much as possible even if it fails to retrieve a particular Notion Page.

Parameters:

  • options (CrawlerOptions): Crawler options.
  • rootPageId (string): Id of the root page to be crawled.

Returns:

  • AsyncGenerator<CrawlingResult>: Crawling results with failed information.

dbCrawler

Recursively crawls a Notion database. Use crawler instead if the root is a Notion page.

Parameters:

  • options (CrawlerOptions): Crawler options.
  • rootDatabaseId (string): Id of the root database to be crawled.

Returns:

  • AsyncGenerator<CrawlingResult>: Crawling results with failed information.
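
For example, here is a minimal sketch of crawling a database. It assumes dbCrawler is imported from notion-md-crawler and is configured and called the same way as crawler in the Quick Start; the root database id is a placeholder:

import { Client } from "@notionhq/client";
import { dbCrawler, pageToString } from "notion-md-crawler";

const client = new Client({ auth: process.env.NOTION_API_KEY });

// Assumption: dbCrawler takes CrawlerOptions and, like crawler, returns a
// crawl function that yields CrawlingResult page by page.
const crawlDb = dbCrawler({ client });

for await (const result of crawlDb("****")) {
  if (result.success) {
    console.log(pageToString(result.page));
  }
}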

CrawlerOptions

| Option | Description | Type | Default |
| --- | --- | --- | --- |
| client | Instance of the Notion Client. Set up an instance of the Client class imported from @notionhq/client. | Notion Client | - |
| serializers? | Used for custom serialization of Block and Property objects. | Object | undefined |
| serializers?.block? | Map of Notion block type to BlockSerializer. | BlockSerializers | undefined |
| serializers?.property? | Map of Notion property type to PropertySerializer. | PropertySerializers | undefined |
| metadataBuilder? | Customizes the metadata generation process. | MetadataBuilder | undefined |
| urlMask? | If specified, URLs are masked with the given string. | string \| false | false |
| skipPageIds? | List of page ids to skip when crawling (descendant pages are also skipped). | string[] | undefined |
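
For illustration, a small sketch combining a few of these options (the mask string and page ids are placeholders):

const crawl = crawler({
  client,
  // Replace every URL in the serialized output with this placeholder string.
  urlMask: "[masked]",
  // Skip these pages and their descendants (placeholder ids).
  skipPageIds: ["****"],
});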

BlockSerializers

Map with Notion block type (like "heading_1", "to_do", "code") as key and BlockSerializer as value.

BlockSerializer

A BlockSerializer takes a Notion block object as an argument. Returning false will skip serialization of that block.

[Type]

type BlockSerializer = (
  block: NotionBlock,
) => string | false | Promise<string | false>;
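
For instance, here is a minimal sketch of a BlockSerializer for "to_do" blocks. It assumes the standard Notion block shape, where block.to_do carries rich_text and checked fields:

import { BlockSerializer, crawler, serializer } from "notion-md-crawler";

// Render to-do blocks as Markdown task list items.
const todoSerializer: BlockSerializer<"to_do"> = (block) => {
  const text = serializer.utils.fromRichText(block.to_do.rich_text);
  return `- [${block.to_do.checked ? "x" : " "}] ${text}`;
};

const serializers = { block: { to_do: todoSerializer } };
const crawl = crawler({ client, serializers });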

PropertySerializers

Map with Notion property type (like "title", "select", "multi_select") as key and PropertySerializer as value.

PropertySerializer

A PropertySerializer takes the property name and the Notion property object as arguments. Returning false will skip serialization of that property.

[Type]

type PropertySerializer = (
  name: string,
  block: NotionBlock,
) => string | false | Promise<string | false>;
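
As an illustration only, here is a sketch that skips phone number properties and renders select properties as "Name: Value" strings. It assumes the serializer receives the property name and the raw Notion property object (with property.select.name for select properties), as described above:

import { crawler } from "notion-md-crawler";

const serializers = {
  property: {
    // Skip phone number properties entirely.
    phone_number: () => false,
    // Assumption: the second argument is the raw Notion property object.
    select: (name: string, property: any) =>
      `${name}: ${property.select?.name ?? ""}`,
  },
};

const crawl = crawler({ client, serializers });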

MetadataBuilder

Retrieving metadata is sometimes very important, but the information you want to retrieve will vary depending on the context. MetadataBuilder allows you to customize it according to your use case.

[Example]

import { crawler, MetadataBuilderParams } from "notion-md-crawler";

const getUrl = (id: string) => `https://www.notion.so/${id.replace(/-/g, "")}`;

const metadataBuilder = ({ page }: MetadataBuilderParams) => ({
  url: getUrl(page.metadata.id),
});

const crawl = crawler({ client, metadataBuilder });

for await (const result of crawl("notion-page-id")) {
  if (result.success) {
    console.log(result.page.metadata.url); // "https://www.notion.so/********"
  }
}

📊 Use Metadata

Since crawler yields Page objects, and each Page object contains metadata, you can use that metadata for machine learning.
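
For example, here is a sketch that keeps each page's Markdown together with its metadata, e.g. as documents for a retrieval pipeline. It reuses the crawl and pageToString from the Quick Start; the metadata is the page id plus whatever your metadataBuilder adds:

const documents: { text: string; metadata: unknown }[] = [];

for await (const result of crawl("****")) {
  if (!result.success) continue;
  documents.push({
    text: pageToString(result.page),
    // Contains the page id and any custom fields added via metadataBuilder.
    metadata: result.page.metadata,
  });
}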

πŸ› οΈ Custom Serialization

notion-md-crawler gives you the flexibility to customize the serialization logic for various Notion objects to cater to the unique requirements of your machine learning model or any other use case.

Define your custom serializer

You can define your own custom serializer. Utility functions are also available for convenience.

import { BlockSerializer, crawler, serializer } from "notion-md-crawler";

const customEmbedSerializer: BlockSerializer<"embed"> = (block) => {
  // Nothing to render without a URL.
  if (!block.embed.url) return "";

  // You can use the serializer utilities.
  const caption = serializer.utils.fromRichText(block.embed.caption);

  return `<figure>
  <iframe src="${block.embed.url}"></iframe>
  <figcaption>${caption}</figcaption>
</figure>`;
};

const serializers = {
  block: {
    embed: customEmbedSerializer,
  },
};

const crawl = crawler({ client, serializers });

Skip serialize

Returning false from a serializer allows you to skip serialization of that block. This is useful when you want to omit unnecessary information.

const image: BlockSerializer<"image"> = () => false;
const crawl = crawler({ client, serializers: { block: { image } } });

Advanced: Use default serializer in custom serializer

If you want to customize serialization only in specific cases, you can use the default serializer in a custom serializer.

import { BlockSerializer, crawler, serializer } from "notion-md-crawler";

const defaultImageSerializer = serializer.block.defaults.image;

const customImageSerializer: BlockSerializer<"image"> = (block) => {
  // Utility function to retrieve the link
  const { title, href } = serializer.utils.fromLink(block.image);

  // If the image is from a specific domain, wrap it in a special div
  if (href.includes("special-domain.com")) {
    return `<div class="special-image">
      ${defaultImageSerializer(block)}
    </div>`;
  }

  // Use the default serializer for all other images
  return defaultImageSerializer(block);
};

const serializers = {
  block: {
    image: customImageSerializer,
  },
};

const crawl = crawler({ client, serializers });

πŸ” Supported Blocks and Database properties

Blocks

| Block Type | Supported |
| --- | --- |
| Text | ✅ Yes |
| Bookmark | ✅ Yes |
| Bulleted List | ✅ Yes |
| Numbered List | ✅ Yes |
| Heading 1 | ✅ Yes |
| Heading 2 | ✅ Yes |
| Heading 3 | ✅ Yes |
| Quote | ✅ Yes |
| Callout | ✅ Yes |
| Equation (block) | ✅ Yes |
| Equation (inline) | ✅ Yes |
| Todos (checkboxes) | ✅ Yes |
| Table Of Contents | ✅ Yes |
| Divider | ✅ Yes |
| Column | ✅ Yes |
| Column List | ✅ Yes |
| Toggle | ✅ Yes |
| Image | ✅ Yes |
| Embed | ✅ Yes |
| Video | ✅ Yes |
| Figma | ✅ Yes |
| PDF | ✅ Yes |
| Audio | ✅ Yes |
| File | ✅ Yes |
| Link | ✅ Yes |
| Page Link | ✅ Yes |
| External Page Link | ✅ Yes |
| Code (block) | ✅ Yes |
| Code (inline) | ✅ Yes |

Database Properties

| Property Type | Supported |
| --- | --- |
| Checkbox | ✅ Yes |
| Created By | ✅ Yes |
| Created Time | ✅ Yes |
| Date | ✅ Yes |
| Email | ✅ Yes |
| Files | ✅ Yes |
| Formula | ✅ Yes |
| Last Edited By | ✅ Yes |
| Last Edited Time | ✅ Yes |
| Multi Select | ✅ Yes |
| Number | ✅ Yes |
| People | ✅ Yes |
| Phone Number | ✅ Yes |
| Relation | ✅ Yes |
| Rich Text | ✅ Yes |
| Rollup | ✅ Yes |
| Select | ✅ Yes |
| Status | ✅ Yes |
| Title | ✅ Yes |
| Unique Id | ✅ Yes |
| Url | ✅ Yes |
| Verification | ░ No |

💬 Issues and Feedback

For any issues, feedback, or feature requests, please file an issue on GitHub.

📜 License

MIT


Made with ❤️ by TomPenguin.
