This file lists all available configuration options and their descriptions.
Each extractor is identified by its category and subcategory. The category is the lowercase site name without any spaces or special characters, which is usually just the module name (pixiv, danbooru, ...). The subcategory is a lowercase word describing the general functionality of that extractor (user, favorite, manga, ...).
Each one of the following options can be specified on multiple levels of the configuration tree:
Base level: extractor.<option-name>
Category level: extractor.<category>.<option-name>
Subcategory level: extractor.<category>.<subcategory>.<option-name>
A value on a deeper level overrides a value of the same name on a more general level. Setting the extractor.pixiv.filename value, for example, lets you specify a general filename pattern for all the different pixiv extractors. Using the extractor.pixiv.user.filename value lets you override this general pattern specifically for PixivUserExtractor instances.
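For example, the three levels could be combined in a configuration file like this (the format strings shown are illustrative, not defaults):

```json
{
    "extractor": {
        "filename": "{filename}.{extension}",
        "pixiv": {
            "filename": "{id}{num}.{extension}",
            "user": {
                "filename": "{id}_{title}.{extension}"
            }
        }
    }
}
```

Here every extractor falls back to the base-level pattern, pixiv extractors use the category-level pattern, and PixivUserExtractor instances use the most specific one.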
The category and subcategory of all extractors are included in the output of gallery-dl --list-extractors. For a specific URL these values can also be determined by using the -K/--list-keywords command-line option (see the example below).
"{manga}_c{chapter}_{page:>03}.{extension}"
{
    "extension == 'mp4'": "{id}_video.{extension}",
    "'nature' in title" : "{id}_{title}.{extension}",
    ""                  : "{id}_default.{extension}"
}
A format string to build filenames for downloaded files with.
If this is an object, it must contain Python expressions mapping to the filename format strings to use. These expressions are evaluated in the order specified in Python 3.6+ and in an undetermined order in Python 3.4 and 3.5.
The available replacement keys depend on the extractor used. A list of keys for a specific one can be acquired by calling gallery-dl with the -K/--list-keywords command-line option. For example:
$ gallery-dl -K http://seiga.nicovideo.jp/seiga/im5977527
Keywords for directory names:
-----------------------------
category
  seiga
subcategory
  image

Keywords for filenames:
-----------------------
category
  seiga
extension
  None
image-id
  5977527
subcategory
  image
Note: Even if the value of the extension key is missing or None, it will be filled in later when the file download is starting. This key is therefore always available to provide a valid filename extension.
["{category}", "{manga}", "c{chapter} - {title}"]
{
    "'nature' in content": ["Nature Pictures"],
    "retweet_id != 0"    : ["{category}", "{user[name]}", "Retweets"],
    ""                   : ["{category}", "{user[name]}"]
}
A list of format strings to build target directory paths with.
If this is an object, it must contain Python expressions mapping to the list of format strings to use.
Each individual string in such a list represents a single path segment, which will be joined together and appended to the base-directory to form the complete target directory path.
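As an illustration of how the segments are joined (the base directory and metadata values here are hypothetical):

```
base-directory : ~/gallery-dl/
directory      : ["{category}", "{manga}", "c{chapter} - {title}"]
result         : ~/gallery-dl/mangadex/Some Manga/c012 - Some Title/
```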
If true, overwrite any metadata provided by a child extractor with its parent's.
{ "id": "child-id", "_p_": {"id": "parent-id"} }
Special values:
Implementation Detail: For strings with length >= 2, this option uses a Regular Expression Character Set, meaning that:
Set of characters to remove from generated path names.
Note: In a string with 2 or more characters, []^-\ need to be escaped with backslashes, e.g. "\\[\\]"
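Sketched as a configuration entry (the option name path-remove is assumed from context; the character set is a hypothetical choice):

```json
{
    "extractor": {
        "path-remove": "!~"
    }
}
```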
Set of characters to remove from the end of generated path segment names using str.rstrip()
Special values:
{
    "jpeg": "jpg",
    "jpe" : "jpg",
    "jfif": "jpg",
    "jif" : "jpg",
    "jfi" : "jpg"
}
Controls the behavior when downloading files that have been downloaded before, i.e. a file with the same filename already exists or its ID is in a download archive.
The username and password to use when attempting to log in to another site.
Specifying username and password is required for
and optional for
These values can also be specified via the -u/--username and -p/--password command-line options or by using a .netrc file. (see Authentication)
(*) The password value for these sites should be the API key found in your user profile, not the actual account password.
Note: Leave the password value empty or undefined to be prompted for a password when performing a login (see getpass()).
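A minimal sketch of site credentials in a configuration file (placeholder values; whether a given site accepts username and password depends on its extractor):

```json
{
    "extractor": {
        "twitter": {
            "username": "<your-username>",
            "password": "<your-password>"
        }
    }
}
```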
Source to read additional cookies from. This can be
The Path to a Mozilla/Netscape format cookies.txt file
"~/.local/share/cookies-instagram-com.txt"
An object specifying cookies as name-value pairs
{
    "cookie-name": "cookie-value",
    "sessionid"  : "14313336321%3AsabDFvuASDnlpb%3A31",
    "isAdult"    : "1"
}
A list with up to 5 entries specifying a browser profile.
["firefox"]
["firefox", null, null, "Personal"]
["chromium", "Private", "kwallet", null, ".twitter.com"]
Export session cookies in cookies.txt format.
"http://10.10.1.10:3128"
{
    "http" : "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
    "http://10.20.1.128": "http://10.10.1.10:5323"
}
Proxy (or proxies) to be used for remote connections.
Note: If a proxy URL does not include a scheme, http:// is assumed.
Client-side IP address to bind to.
User-Agent header value to be used for HTTP requests.
Setting this value to "browser" will try to automatically detect and use the User-Agent used by the system's default browser.
Note: This option has no effect on pixiv, e621, and mangadex extractors, as these need specific values to function correctly.
Try to emulate a real browser (firefox or chrome) by using their default HTTP headers and TLS ciphers for HTTP requests.
Optionally, the operating system used in the User-Agent header can be specified after a : (windows, linux, or macos).
Note: requests and urllib3 only support HTTP/1.1, while a real browser would use HTTP/2.
Send Referer headers with all outgoing HTTP requests.
If this is a string, send it as Referer instead of the extractor's root domain.
{
    "User-Agent"     : "<extractor.*.user-agent>",
    "Accept"         : "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Referer"        : "<extractor.*.referer>"
}
Additional HTTP headers to be sent with each HTTP request.
To disable sending a header, set its value to null.
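For instance, to stop sending one of the default headers (the header chosen here is illustrative):

```json
{
    "extractor": {
        "headers": {
            "Accept-Encoding": null
        }
    }
}
```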
["ECDHE-ECDSA-AES128-GCM-SHA256", "ECDHE-RSA-AES128-GCM-SHA256", "ECDHE-ECDSA-CHACHA20-POLY1305", "ECDHE-RSA-CHACHA20-POLY1305"]
Allow selecting TLS 1.2 cipher suites.
Can be disabled to alter TLS fingerprints and potentially bypass Cloudflare blocks.
Insert a file's download URL into its metadata dictionary as the given name.
For example, setting this option to "gdl_file_url" will cause a new metadata field with name gdl_file_url to appear, which contains the current file's download URL. This can then be used in filenames, with a metadata post processor, etc.
Insert a reference to the current PathFormat data structure into metadata dictionaries as the given name.
For example, setting this option to "gdl_path" would make it possible to access the current file's filename as "{gdl_path.filename}".
Insert an object containing a file's HTTP headers and filename, extension, and date parsed from them into metadata dictionaries as the given name.
For example, setting this option to "gdl_http" would make it possible to access the current file's Last-Modified header as "{gdl_http[Last-Modified]}" and its parsed form as "{gdl_http[date]}".
Insert an object containing gallery-dl's version info into metadata dictionaries as the given name.
The content of the object is as follows:
{
    "version"         : "string",
    "is_executable"   : "bool",
    "current_git_head": "string or null"
}
A list of extractor identifiers to ignore (or allow) when spawning child extractors for unknown URLs, e.g. from reddit or plurk.
Each identifier can be
A category or basecategory name ("imgur", "mastodon")
Note: Any blacklist setting will automatically include "oauth", "recursive", and "test".
File to store IDs of downloaded files in. Downloads of files already recorded in this archive file will be skipped.
The resulting archive file is not a plain text file but an SQLite3 database, as lookup operations are significantly faster and memory requirements significantly lower once the number of stored IDs grows reasonably large.
Note: Archive files that do not already exist get generated automatically.
Note: Archive paths support regular format string replacements, but be aware that using external inputs for building local paths may pose a security risk.
A list of SQLite PRAGMA statements to run during archive initialization.
See https://www.sqlite.org/pragma.html for available PRAGMA statements and further details.
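A sketch enabling write-ahead logging for faster archive access (the option name archive-pragma is assumed from context):

```json
{
    "extractor": {
        "archive-pragma": ["journal_mode=WAL", "synchronous=NORMAL"]
    }
}
```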
[
    {"name": "zip" , "compression": "store"},
    {"name": "exec", "command": ["/home/foobar/script", "{category}", "{image_id}"]}
]
A list of post processors to be applied to each downloaded file in the specified order.
For example
will run all three post processors - mtime, zip, exec - for each downloaded pixiv file.
{ "archive": null, "keep-files": true }
Additional HTTP response status codes to retry an HTTP request on.
2xx codes (success responses) and 3xx codes (redirection messages) will never be retried and always count as success, regardless of this option.
5xx codes (server error responses) will always be retried, regardless of this option.
Amount of time (in seconds) to wait for a successful connection and response from a remote server.
This value gets internally used as the timeout parameter for the requests.request() method.
Controls whether to verify SSL/TLS certificates for HTTPS requests.
If this is a string, it must be the path to a CA bundle to use instead of the default certificates.
This value gets internally used as the verify parameter for the requests.request() method.
Controls whether to download media files.
Setting this to false won't download any files, but all other functions (postprocessors, download archive, etc.) will be executed as normal.
Index range(s) selecting which files to download.
These can be specified as
Note: The index of the first file is 1.
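Illustrative range specifications (values are hypothetical):

```
"5"       only the 5th file
"8-20"    files 8 through 20
"-5"      the first 5 files
"10-"     all files from the 10th onward
```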
Python expression controlling which files to download.
A file only gets downloaded when all of the given expressions evaluate to True.
Available values are the filename-specific ones listed by -K or -j.
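A sketch of such an expression (the option name image-filter, and the metadata keys width and extension, are assumptions that depend on the extractor in use):

```json
{
    "extractor": {
        "image-filter": "width >= 1200 and extension == 'png'"
    }
}
```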
Format string used to parse string values of date-min and date-max.
See strftime() and strptime() Behavior for a list of formatting directives.
Note: Despite its name, this option does not control how {date} metadata fields are formatted. To use a different formatting for those values other than the default %Y-%m-%d %H:%M:%S, put strftime() and strptime() Behavior formatting directives after a colon :, for example {date:%Y%m%d}.
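Assuming this option is named date-format, a sketch that parses date-min and date-max values with an explicit time component:

```json
{
    "extractor": {
        "date-format": "%Y-%m-%dT%H:%M:%S",
        "date-min": "2023-01-01T00:00:00",
        "date-max": "2023-12-31T23:59:59"
    }
}
```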
During data extraction, write received HTTP request data to enumerated files in the current working directory.
Special values:
Controls the post extraction strategy.
Specifies the requested image width.
This value must be divisible by 16 and gets rounded down otherwise. The maximum possible value appears to be 1920.
Selects which gallery modules to download from.
Supported module types are image, video, mediacollection, embed, text.
Specifies the domain used by cyberdrop regardless of input URL.
Setting this option to "auto" uses the same domain as a given input URL.
Controls the download target for Ugoira posts.
Extract additional metadata (notes, artist commentary, parent, children, uploader)
It is possible to specify a custom list of metadata includes. See available_includes for possible field names. aibooru also supports ai_metadata.
Note: This requires 1 additional HTTP request per 200-post batch.
Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 200.
Note: Changing this setting is normally not necessary. When the value is greater than the per-page limit, gallery-dl will stop after the first batch. The value cannot be less than 1.
The content filter ID to use.
Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without API Key.
See Filters for details.
Download extra Sta.sh resources from description texts and journals.
Note: Enabling this option also enables deviantart.metadata.
Select the directory structure created by the Gallery- and Favorite-Extractors.
true: Use a flat directory structure.
false: Collect a list of all gallery-folders or favorites-collections and transfer any further work to other extractors (folder or collection), which will then create individual subdirectories for each of them.
Note: When going through all gallery folders, deviations that aren't in any folder cannot be fetched.
Provide a folders metadata field that contains the names of all folders a deviation is present in.
Note: Gathering this information requires a lot of API calls. Use with caution.
Check whether the profile name in a given URL belongs to a group or a regular user.
When disabled, assume every given profile name belongs to a regular user.
Special values:
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "avatar", "background", "gallery", "scraps", "journal", "favorite", "status".
It is possible to use "all" instead of listing all values separately.
Selects the output format for textual content. This includes journals, literature and status updates.
Update JSON Web Tokens (the token URL parameter) of otherwise non-downloadable, low-resolution images to be able to download them in full resolution.
Note: No longer functional as of 2023-10-11
Enable mature content.
This option simply sets the mature_content parameter for API calls to either "true" or "false" and does not do any other form of content filtering.
Download original files if available.
Setting this option to "images" only downloads original files if they are images and falls back to preview versions for everything else (archives, etc.).
Controls when to stop paginating over API results.
Use a public access token for API requests.
Disable this option to force using a private token for all requests when a refresh token is provided.
JPEG quality level of newer images for which an original file download is not available.
Note: Only has an effect when deviantart.jwt is disabled.
The refresh-token value you get from linking your DeviantArt account to gallery-dl.
Using a refresh-token allows you to access private or otherwise not publicly available deviations.
Note: The refresh-token becomes invalid after 3 months or whenever your cache file is deleted or cleared.
Avatar URL formats to return.
Extract additional metadata (notes, pool metadata) if available.
Note: This requires 0-2 additional HTTP requests per post.
Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 320.
Note: Changing this setting is normally not necessary. When the value is greater than the per-page limit, gallery-dl will stop after the first batch. The value cannot be less than 1.
After downloading a gallery, add it to your account's favorites as the given category number.
Note: Set this to "favdel" to remove galleries from your favorites.
Note: This will remove any Favorite Notes when applied to already favorited galleries.
Selects how to handle "you do not have enough GP" errors.
Load extended gallery metadata from the API.
Adds archiver_key, posted, and torrents. Makes date and filesize more precise.
Selects an alternative source to download files from.
Control behavior on embedded content from external sites.
Fetch exif and camera metadata for each photo.
Note: This requires 1 additional API call per photo.
Extract additional metadata (license, date_taken, original_format, last_update, geo, machine_tags, o_dims)
It is possible to specify a custom list of metadata includes. See the extras parameter in Flickr API docs for possible field names.
Sets the maximum allowed size for downloaded images.
Controls the format of description metadata fields.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "gallery", "scraps", "favorite".
It is possible to use "all" instead of listing all values separately.
Selects which site layout to expect when parsing posts.
API token value found at the bottom of your profile page.
If not set, a temporary guest token will be used.
API token value used during API requests.
An invalid or not up-to-date value will result in 401 Unauthorized errors.
Keeping this option unset will use an extra HTTP request to attempt to fetch the current value used by gofile.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "pictures", "scraps", "stories", "favorite".
It is possible to use "all" instead of listing all values separately.
Selects which image format to download.
Available formats are "webp" and "avif".
"original" will try to download the original jpg or png versions, but is most likely going to fail with 403 Forbidden errors.
Your personal Image Chest access token.
These tokens allow using the API instead of having to scrape HTML pages, providing more detailed metadata. (date, description, etc)
See https://imgchest.com/docs/api/1.0/general/authorization for instructions on how to generate such a token.
Controls whether to choose the GIF or MP4 version of an animation.
Value of the orderby parameter for submission searches.
(See API#Search for details)
Selects which API endpoints to use.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "posts", "reels", "tagged", "stories", "highlights", "avatar".
It is possible to use "all" instead of listing all values separately.
Provide extended user metadata even when referring to a user by ID, e.g. instagram.com/id:12345678.
Note: This metadata is always available when referring to a user by name, e.g. instagram.com/USERNAME.
Controls the order in which files of each post are returned.
Note: This option does not affect {num}. To enumerate files in reverse order, use count - num + 1.
Controls the order in which posts are returned.
Note: This option only affects highlights.
Extract comments metadata.
Note: This requires 1 additional HTTP request per post.
Controls how to handle duplicate files in a post.
Determines the type of favorites to be downloaded.
Available types are artist and post.
Determines the type and order of files to be downloaded.
Available types are file, attachments, and inline.
Extract post revisions.
Note: This requires 1 additional HTTP request per post.
The name of the preferred file format to download.
Use "all" to download all available formats, or a (comma-separated) list to select multiple formats.
If the selected format is not available, the first in the list gets chosen (usually mp3).
Specifies the domain used by a lolisafe extractor regardless of input URL.
Setting this option to "auto" uses the same domain as a given input URL.
Format in which to download animated images.
Use true to download animated images as gifs and false to download as mp4 videos.
Additional query parameters to send when fetching manga chapters.
(See /manga/{id}/feed and /user/follows/manga/feed)
Select chapter source and language for a manga.
Specifying the numeric ID of a source is also supported.
The access-token value you get from linking your account to gallery-dl.
Note: gallery-dl comes with built-in tokens for mastodon.social, pawoo and baraag. For other instances, you need to obtain an access-token in order to use usernames in place of numerical user IDs.
Extract extended pool metadata.
Note: Not supported by all moebooru instances.
Selects the preferred format for video downloads.
If the selected format is not available, the next smaller one gets chosen.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "art", "audio", "games", "movies".
It is possible to use "all" instead of listing all values separately.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "illustration", "doujin", "favorite", "nuita".
It is possible to use "all" instead of listing all values separately.
Control video download behavior.
Controls how a user is directed to an OAuth authorization page.
Port number to listen on during OAuth authorization.
Note: All redirects will go to port 6414, regardless of the port specified here. You'll have to manually adjust the port number in your browser's address bar when using a different port than the default.
Extract additional metadata (source, uploader)
Note: This requires 1 additional HTTP request per post.
Determines the type and order of files to be downloaded.
Available types are postfile, images, image_large, attachments, and content.
Specifies the domain used by pinterest extractors.
Setting this option to "auto" uses the same domain as a given input URL.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "artworks", "avatar", "background", "favorite", "novel-user", "novel-bookmark".
It is possible to use "all" instead of listing all values separately.
For works bookmarked by your own account, fetch bookmark tags as tags_bookmark metadata.
Note: This requires 1 additional API call per bookmarked post.
Controls the tags metadata field.
Download Pixiv's Ugoira animations or ignore them.
These animations come as a .zip file containing all animation frames in JPEG format.
Use an ugoira post processor to convert them to watchable videos. (Example)
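A sketch of such a post processor converting the frame archives to WebM (the option names and ffmpeg arguments shown are assumptions based on the post processor's documentation, not verified defaults):

```json
{
    "extractor": {
        "pixiv": {
            "ugoira": true,
            "postprocessors": [{
                "name": "ugoira",
                "extension": "webm",
                "ffmpeg-args": ["-c:v", "libvpx-vp9", "-an", "-b:v", "0", "-crf", "30"]
            }]
        }
    }
}
```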
Format in which to download animated images.
Use true to download animated images as gifs and false to download as mp4 videos.
Controls how to handle redirects to CAPTCHA pages.
Sets the quality query parameter of issue pages. ("lq" or "hq")
"auto" uses the quality parameter of the input URL or "hq" if not present.
The value of the limit parameter when loading a submission and its comments. This number (roughly) specifies the total amount of comments being retrieved with the first API call.
Reddit's internal default and maximum values for this parameter appear to be 200 and 500 respectively.
The value 0 ignores all comments and significantly reduces the time required when scanning a subreddit.
Retrieve additional comments by resolving the more comment stubs in the base comment tree.
Note: This requires 1 additional API call for every 100 extra comments.
Reddit extractors can recursively visit other submissions linked to in the initial set of submissions. This value sets the maximum recursion depth.
Special values:
The refresh-token value you get from linking your Reddit account to gallery-dl.
Using a refresh-token allows you to access private or otherwise not publicly available subreddits, given that your account is authorized to do so, but requests to the reddit API are going to be rate limited at 600 requests every 10 minutes/600 seconds.
Control video download behavior.
(*) This saves 1 HTTP request per video and might potentially be able to download otherwise deleted videos, but it will not always get the best video quality available.
List of names of the preferred animation format, which can be "hd", "sd", "gif", "thumbnail", "vthumbnail", or "poster".
If a selected format is not available, the next one in the list will be tried until an available format is found.
If the format is given as a string, it will be extended with ["hd", "sd", "gif"]. Use a list with one element to restrict it to only one possible format.
Only include assets that are in the specified dimensions. all can be used to specify all dimensions. Valid values are:
Only include assets that are in the specified file types. all can be used to specify all file types. Valid values are:
Set the chosen sorting method when downloading from a list of assets. Can be one of:
Only include assets that are in the specified styles. all can be used to specify all styles. Valid values are:
Username and login token of your account to access private resources.
To generate a token, visit /user/USERNAME/list-tokens and click Create Token.
Custom offset starting value when paginating over blog posts.
Allows skipping over posts without having to waste API calls.
Download full-resolution photo and inline images.
For each photo with "maximum" resolution (width equal to 2048 or height equal to 3072) or each inline image, use an extra HTTP request to find the URL to its full-resolution version.
Selects how to handle exceeding the daily API rate limit.
A (comma-separated) list of post types to extract images, etc. from.
Possible types are text, quote, link, answer, video, audio, photo, chat.
It is possible to use "all" instead of listing all types separately.
The content filter ID to use.
Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without API Key.
See Filters for details.
Controls how to handle Twitter Cards.
List of card types to ignore.
Possible values are
For input URLs pointing to a single Tweet, e.g. https://twitter.com/i/web/status/<TweetID>, fetch media from all Tweets and replies in this conversation.
If this option is equal to "accessible", only download from conversation Tweets if the given initial Tweet is accessible.
Controls how to handle Cross Site Request Forgery (CSRF) tokens.
For each Tweet, return all Tweets from that initial Tweet's conversation or thread, i.e. expand all Twitter threads.
Going through a timeline with this option enabled is essentially the same as running gallery-dl https://twitter.com/i/web/status/<TweetID> with enabled conversations option for each Tweet in said timeline.
Note: This requires at least 1 additional API call per initial Tweet.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "avatar", "background", "timeline", "tweets", "media", "replies", "likes".
It is possible to use "all" instead of listing all values separately.
Selects the API endpoint used to retrieve single Tweets.
The image version to download. Any entries after the first one will be used for potential fallback URLs.
Known available sizes are 4096x4096, orig, large, medium, and small.
Fetch media from quoted Tweets.
If this option is enabled, gallery-dl will try to fetch a quoted (original) Tweet when it sees the Tweet which quotes it.
Selects how to handle exceeding the API rate limit.
Fetch media from replies to other Tweets.
If this value is "self", only consider replies where reply and original Tweet are from the same user.
Note: Twitter will automatically expand conversations if you use the /with_replies timeline while logged in. For example, media from Tweets which the user replied to will also be downloaded.
It is possible to exclude unwanted Tweets using image-filter.
Fetch media from Retweets.
If this value is "original", metadata for these files will be taken from the original Tweets, not the Retweets.
Controls the strategy / tweet source used for timeline URLs (https://twitter.com/USER/timeline).
Also emit metadata for text-only Tweets without media content.
This only has an effect with a metadata (or exec) post processor with "event": "post" and appropriate filename.
Special values:
Note: To allow gallery-dl to follow custom URL formats, set the blacklist for twitter to a non-default value, e.g. an empty string "".
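Sketched as a configuration entry:

```json
{
    "extractor": {
        "twitter": {
            "blacklist": ""
        }
    }
}
```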
Control video download behavior.
Name of the image format to download.
Available formats are "raw", "full", "regular", "small", and "thumb".
Your Wallhaven API Key, to use your account's browsing settings and default filters when searching.
See https://wallhaven.cc/help/api for more information.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "uploads", "collections".
It is possible to use "all" instead of listing all values separately.
Extract additional metadata (tags, uploader)
Note: This requires 1 additional HTTP request per post.
Note: This requires 1 additional HTTP request per submission.
A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are "home", "feed", "videos", "newvideo", "article", "album".
It is possible to use "all" instead of listing all values separately.
Fetch media from retweeted posts.
If this value is "original", metadata for these files will be taken from the original posts, not the retweeted posts.
Controls the use of youtube-dl's generic extractor.
Set this option to "force" for the same effect as youtube-dl's --force-generic-extractor.
Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.
Note: Set quiet and no_warnings in extractor.ytdl.raw-options to true to suppress all output.
Name of the youtube-dl Python module to import.
Setting this to null will try to import "yt_dlp" followed by "youtube_dl" as fallback.
{
    "quiet": true,
    "writesubtitles": true,
    "merge_output_format": "mkv"
}
Additional options passed directly to the YoutubeDL constructor.
All available options can be found in youtube-dl's docstrings.
Extract additional metadata (date, md5, tags, ...)
Note: This requires 1-2 additional HTTP requests per post.
Categorize tags by their respective types and provide them as tags_<type> metadata fields.
Note: This requires 1 additional HTTP request per post.
Extract overlay notes (position and text).
Note: This requires 1 additional HTTP request per post.
Reverse the order of chapter URLs extracted from manga pages.
Minimum/Maximum allowed file size in bytes. Any file smaller/larger than this limit will not be downloaded.
Possible values are valid integer or floating-point numbers, optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.
Controls the use of .part files during file downloads.
Alternate location for .part files.
Missing directories will be created as needed. If this value is null, .part files are going to be stored alongside the actual output files.
Number of seconds until a download progress indicator for the current download is displayed.
Set this option to null to disable this indicator.
Maximum download rate in bytes per second.
Possible values are valid integer or floating-point numbers, optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.
Proxy server used for file downloads.
Disable the use of a proxy for file downloads by explicitly setting this option to null.
Check file headers of downloaded files and adjust their filename extensions if they do not match.
For example, this will change the filename extension ({extension}) of a file called example.png from png to jpg when said file contains JPEG/JFIF data.
Controls the behavior when an HTTP response is considered unsuccessful.
If the value is true, consume the response body. This avoids closing the connection and therefore improves connection reuse.
If the value is false, immediately close the connection without reading the response. This can be useful if the server is known to send large bodies for error responses.
Number of bytes per downloaded chunk.
Possible values are integer numbers, optionally followed by one of k, m, g, t, or p. These suffixes are case-insensitive.
Additional HTTP response status codes to retry a download on.
Codes 200, 206, and 416 (when resuming a partial download) will never be retried and always count as success, regardless of this option.
5xx codes (server error responses) will always be retried, regardless of this option.
Check for invalid responses.
Fail a download when a file does not pass instead of downloading a potentially broken file.
Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr.
Note: Set quiet and no_warnings in downloader.ytdl.raw-options to true to suppress all output.
Name of the youtube-dl Python module to import.
Setting this to null will first try to import "yt_dlp" and use "youtube_dl" as fallback.
The Output Template used to generate filenames for files downloaded with youtube-dl.
Special values:
Note: An output template other than null might cause unexpected results in combination with other options (e.g. "skip": "enumerate")
{ "quiet": true, "writesubtitles": true, "merge_output_format": "mkv" }
Additional options passed directly to the YoutubeDL constructor.
All available options can be found in youtube-dl's docstrings.
Controls the output string format and status indicators.
For example, the following will replicate the same output as "mode": "color":
{ "start" : "{}", "success": "\r\u001b[1;32m{}\u001b[0m\n", "skip" : "\u001b[2m{}\u001b[0m\n", "progress" : "\r{0:>7}B {1:>7}B/s ", "progress-total": "\r{3:>3}% {0:>7}B {1:>7}B/s " }
start, success, and skip are used to output the current filename, where {} or {0} is replaced with said filename. If a given format string contains printable characters besides this placeholder, their number needs to be specified as [<number>, <format string>] to get correct results for output.shorten. For example:
"start" : [12, "Downloading {}"]
progress and progress-total format the download progress indicator, where {0} is the number of bytes downloaded and {1} is the current download rate in bytes per second.
"utf-8"
{ "encoding": "utf-8", "errors": "replace", "line_buffering": true }
Reconfigure a standard stream.
Possible options are
When this option is specified as a simple string, it is interpreted as {"encoding": "<string-value>", "errors": "replace"}
Note: errors always defaults to "replace"
Controls whether the output strings should be shortened to fit on one console line.
Set this option to "eaw" to also work with East Asian characters with a display width greater than 1.
Controls the progress indicator when gallery-dl is run with multiple URLs as arguments.
Configuration for logging output to stderr.
If this is a simple string, it specifies the format string for logging messages.
File to write external URLs unsupported by gallery-dl to.
The default format string here is "{message}".
File to write input URLs which returned an error to.
The default format string here is also "{message}".
When combined with -I/--input-file-comment or -x/--input-file-delete, this option causes all input URLs from these files to be commented out/deleted after processing, not just the successful ones.
This section lists all options available inside Postprocessor Configuration objects.
Each option is titled as <name>.<option>, meaning a post processor of type <name> will look for an <option> field inside its "body". For example, an exec post processor will recognize async, command, and event fields:
{ "name" : "exec", "async" : false, "command": "...", "event" : "after" }
{ "Pictures": ["jpg", "jpeg", "png", "gif", "bmp", "svg", "webp"], "Video" : ["flv", "ogv", "avi", "mp4", "mpg", "mpeg", "3gp", "mkv", "webm", "vob", "wmv"], "Music" : ["mp3", "aac", "flac", "ogg", "wma", "m4a", "wav"], "Archives": ["zip", "rar", "7z", "tar", "gz", "bz2"] }
A mapping from directory names to filename extensions that should be stored in them.
Files with an extension not listed will be ignored and stored in their default location.
The action to take when files do not compare as equal.
The action to take when files do compare as equal.
File to store IDs of executed commands in, similar to extractor.*.archive.
archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.
The command to run.
The event for which exec.command is run.
See metadata.event for a list of available events.
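A minimal sketch of an exec post processor that runs a command once all files of a post have been downloaded; notify-send is only an illustrative program:

```json
{
    "name"   : "exec",
    "event"  : "after",
    "command": "notify-send gallery-dl 'Download finished'"
}
```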
Selects how to process metadata.
A format string to build the filenames for metadata files with. (see extractor.filename)
Using "-" as filename will write all output to stdout.
If this option is set, metadata.extension and metadata.extension-format will be ignored.
Custom format string to build filename extensions for metadata files with, which will replace the original filename extensions.
Note: metadata.extension is ignored if this option is set.
The event for which metadata gets written to a file.
The available events are:
["blocked", "watching", "status[creator][name]"]
{ "blocked" : "***", "watching" : "\fE 'yes' if watching else 'no'", "status[username]": "{status[creator][name]!l}" }
Custom format string to build the content of metadata files with.
Note: Only applies for "mode": "custom".
Escape all non-ASCII characters.
See the ensure_ascii argument of json.dump() for further details.
Note: Only applies for "mode": "json" and "jsonl".
Indentation level of JSON output.
See the indent argument of json.dump() for further details.
Note: Only applies for "mode": "json".
<item separator> - <key separator> pair to separate JSON keys and values with.
See the separators argument of json.dump() for further details.
Note: Only applies for "mode": "json" and "jsonl".
Sort output by key.
See the sort_keys argument of json.dump() for further details.
Note: Only applies for "mode": "json" and "jsonl".
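The JSON-related options above map directly onto json.dump() arguments, so their effect can be previewed in plain Python:

```python
import json

data = {"title": "café", "id": 42}

# ensure_ascii escapes all non-ASCII characters
escaped = json.dumps(data, ensure_ascii=True, sort_keys=True)
# escaped == '{"id": 42, "title": "caf\\u00e9"}'

# indent, separators, and sort_keys control the layout
pretty = json.dumps(
    data, ensure_ascii=False, indent=4,
    separators=(",", ": "), sort_keys=True,
)
```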
The mode in which metadata files get opened.
For example, use "a" to append to a file's content or "w" to truncate it.
See the mode argument of the built-in open() function for further details.
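The mode values behave exactly like those of Python's built-in open(); a quick illustration of "w" (truncate) versus "a" (append):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "metadata.txt")

# "w" truncates the file on every write
with open(path, "w", encoding="utf-8") as f:
    f.write("first\n")
with open(path, "w", encoding="utf-8") as f:
    f.write("second\n")

# "a" appends to the existing content instead
with open(path, "a", encoding="utf-8") as f:
    f.write("third\n")

with open(path, encoding="utf-8") as f:
    content = f.read()
# content == "second\nthird\n"
```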
Name of the encoding used to encode a file's content.
See the encoding argument of the built-in open() function for further details.
File to store IDs of generated metadata files in, similar to extractor.*.archive.
archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.
Set modification times of generated metadata files according to the accompanying downloaded file.
Enabling this option will only have an effect if there is actual mtime metadata available, that is:
For example, a metadata post processor for "event": "post" will not be able to set its file's modification time unless an mtime post processor with "event": "post" runs before it.
Name of the metadata field whose value should be used.
This value must be either a UNIX timestamp or a datetime object.
Note: This option gets ignored if mtime.value is set.
A format string whose value should be used.
The resulting value must be either a UNIX timestamp or a datetime object.
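A minimal sketch of an mtime post processor reading its timestamp from a date metadata field (assuming the extractor in use provides one):

```json
{
    "name": "mtime",
    "key" : "date"
}
```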
File to store IDs of called Python functions in, similar to extractor.*.archive.
archive-format, archive-prefix, and archive-pragma options, akin to extractor.*.archive-format, extractor.*.archive-prefix, and extractor.*.archive-pragma, are supported as well.
The event for which python.function gets called.
See metadata.event for a list of available events.
The Python function to call.
This function gets specified as <module>:<function name> and gets called with the current metadata dict as argument.
module is either an importable Python module name or the path to a .py file.
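As a sketch, a module referenced as "function": "my_module:normalize_title" (both names hypothetical) could look like this; the function simply mutates the metadata dict it receives:

```python
# my_module.py (hypothetical name) -- referenced from the config as
#   { "name": "python", "function": "my_module:normalize_title" }

def normalize_title(metadata):
    # gallery-dl passes the current metadata dict as the only argument;
    # changes made here are visible to later format strings
    metadata["title"] = metadata.get("title", "untitled").strip().lower()
```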
FFmpeg demuxer to read and process input files with. Possible values are
"auto" will select mkvmerge if available and fall back to concat otherwise.
Controls FFmpeg output.
Controls the frame rate argument (-r) for FFmpeg
Prevent "width/height not divisible by 2" errors when using libx264 or libx265 encoders by applying a simple cropping filter. See this Stack Overflow thread for more information.
When the libx264 or libx265 encoder is used, this option automatically adds ["-vf", "crop=iw-mod(iw\\,2):ih-mod(ih\\,2)"] to the list of FFmpeg command-line arguments, reducing an odd width/height by 1 pixel to make them even.
Compression method to use when writing the archive.
Possible values are "store", "zip", "bzip2", "lzma".
List of extra files to be added to a ZIP archive.
Note: Relative paths are relative to the current download directory.
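For example, a zip post processor that also stores a previously written metadata.json (an illustrative filename) inside each archive:

```json
{
    "name" : "zip",
    "files": ["metadata.json"]
}
```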
"default": Write the central directory file header once after everything is done or an exception is raised.
"safe": Update the central directory file header each time a file is stored in a ZIP archive.
This greatly reduces the chance a ZIP archive gets corrupted in case the Python interpreter gets shut down unexpectedly (power outage, SIGKILL) but is also a lot slower.
List of directories to load external extractor modules from.
Any file in a specified directory with a .py filename extension gets imported and searched for potential extractors, i.e. classes with a pattern attribute.
Note: null references internal extractors defined in extractor/__init__.py or by extractor.modules.
Path of the SQLite3 database used to cache login sessions, cookies and API tokens across gallery-dl invocations.
Set this option to null or an invalid path to disable this cache.
Character(s) used as argument separator in format string format specifiers.
For example, setting this option to "#" would allow a replacement operation to be written as Rold#new# instead of the default Rold/new/.
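A minimal fragment using the alternative separator in a filename pattern (the replacement itself, spaces to underscores in {title}, is illustrative):

```json
{
    "format-separator": "#",
    "extractor": {
        "filename": "{title:R #_#}.{extension}"
    }
}
```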
All configuration keys listed in this section have fully functional default values embedded into gallery-dl itself, but if things unexpectedly break or you want to use your own personal client credentials, you can follow these instructions to get an alternative set of API tokens and IDs.
A Date value represents a specific point in time.
A Duration represents a span of time in seconds.
A Path is a string representing the location of a file or directory.
Simple tilde expansion and environment variable expansion is supported.
In Windows environments, backslashes ("\") can, in addition to forward slashes ("/"), be used as path separators. Because backslashes are JSON's escape character, they themselves have to be escaped. The path C:\path\to\file.ext has therefore to be written as "C:\\path\\to\\file.ext" if you want to use backslashes.
{ "format" : "{asctime} {name}: {message}", "format-date": "%H:%M:%S", "path" : "~/log.txt", "encoding" : "ascii" }
{ "level" : "debug", "format": { "debug" : "debug: {message}", "info" : "[{name}] {message}", "warning": "Warning: {message}", "error" : "ERROR: {message}" } }
Extended logging output configuration.
General format string for logging messages or a dictionary with format strings for each loglevel.
In addition to the default LogRecord attributes, it is also possible to access the current extractor, job, path, and keywords objects and their attributes, for example "{extractor.url}", "{path.filename}", "{keywords.title}"
Default: "[{name}][{levelname}] {message}"
Note: path, mode, and encoding are only applied when configuring logging output to a file.
{ "name": "mtime" }
{ "name" : "zip", "compression": "store", "extension" : "cbz", "filter" : "extension not in ('zip', 'rar')", "whitelist" : ["mangadex", "exhentai", "nhentai"] }
An object containing a "name" attribute specifying the post-processor type, as well as any of its options.
It is possible to set a "filter" expression similar to image-filter to only run a post-processor conditionally.
It is also possible to set a "whitelist" or "blacklist" to only enable or disable a post-processor for the specified extractor categories.
The available post-processor types are