Compare commits


3 commits

Author | SHA1 | Message | Date
mkb79 | bcde02e6af | update build.yml | 2023-09-27 09:17:56 +02:00
mkb79 | c10ed82985 | update build.yml | 2023-09-27 08:40:21 +02:00
mkb79 | 1dcb605a3e | ci: rework build gh action | 2023-09-27 07:55:29 +02:00
17 changed files with 199 additions and 1139 deletions

.github/FUNDING.yml (vendored): 13 deletions

@ -1,13 +0,0 @@
# These are supported funding model platforms
github: [mkb79] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@ -95,8 +95,9 @@ jobs:
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_URL: ${{ needs.createrelease.outputs.release_url }}
with:
upload_url: ${{ needs.createrelease.outputs.release_url }}
upload_url: ${{ RELEASE_URL }}
asset_path: ./dist/${{ matrix.OUT_FILE_NAME}}
asset_name: ${{ matrix.OUT_FILE_NAME}}
asset_content_type: ${{ matrix.ASSET_MIME}}


@ -6,73 +6,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased
### Bugfix
-
- Fixing `[Errno 18] Invalid cross-device link` when downloading files with the `--output-dir` option, by creating the resume file in the same location as the target file.
### Added
- The `--chapter-type` option is added to the download command. Chapters can now be
downloaded as `flat` or `tree` type. `tree` is the default. A default chapter type
can be set in the config file.
### Changed
- Improved podcast ignore feature in download command
- make `--ignore-podcasts` and `--resolve-podcasts` options of the download command mutually
exclusive
- Switched from a HEAD to a GET request without loading the body in the downloader
class. This change improves the program's speed, as the HEAD request was taking
considerably longer than a GET request on some Audible pages.
- `models.LibraryItem.get_content_metadata` now accepts a `chapter_type` argument
(see the sketch below). Additional keyword arguments to this method are now passed
through to the metadata request.
- Update httpx version range to >=0.23.3 and <0.28.0.
- fix typo from `resolve_podcats` to `resolve_podcasts`
- `models.Library.resolve_podcats` is now deprecated and will be removed in a future version
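To make the `chapter_type` entries above concrete, here is a minimal usage sketch. It is based on the `models.LibraryItem.get_content_metadata` signature visible further down in this diff; `item` stands in for a `LibraryItem` obtained from a synced library.

```python
# Hypothetical usage of the reworked metadata call; "Flat" and "Tree" are
# the two accepted chapter types (the value is capitalized internally).
# Extra keyword arguments are passed through to the metadata request.
async def fetch_flat_chapter_metadata(item):
    return await item.get_content_metadata("high", chapter_type="Flat")
```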
## [0.3.1] - 2024-03-19
### Bugfix
- fix a `TypeError` on some Python versions when calling `importlib.metadata.entry_points` with group argument
## [0.3.0] - 2024-03-19
### Added
- Added a resume feature when downloading aaxc files.
- New `downloader` module which contains a rework of the Downloader class.
- If necessary, large audiobooks are now downloaded in parts.
- Plugin command help page now contains additional information about the source of
the plugin.
- Command help text now starts with `(P)` for plugin commands.
### Changed
- Rework plugin module
- using importlib.metadata over setuptools (pkg_resources) to get entrypoints
## [0.2.6] - 2023-11-16
### Added
- Update marketplace choices in `manage auth-file add` command. Now all available marketplaces are listed.
### Bugfix
- Avoid tqdm progress bar interruption by logger output to the console.
- Fixing an issue with unawaited coroutines when the download command exited abnormally.
### Changed
- Update httpx version range to >=0.23.3 and <0.26.0.
### Misc
- add `freeze_support` to pyinstaller entry script (#78)
## [0.2.5] - 2023-09-26
## [0.2.5] - 2022-09-26
### Added
@ -120,7 +56,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- by default a license request (voucher) will not include chapter information
- moved licenserequest part from `models.LibraryItem.get_aaxc_url` to its own `models.LibraryItem.get_license` function
- allow book titles with hyphens (#96)
- allow book tiltes with hyphens (#96)
- if there is no title fallback to an empty string (#98)
- reduce `response_groups` for the download command to speed up fetching the library (#109)
@ -128,7 +64,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- `Extreme` quality is not supported by the Audible API anymore (#107)
- download command continued execution after error (#104)
- Currently, paths with dots will break the decryption (#97)
- Currently paths with dots will break the decryption (#97)
- `models.Library.from_api_full_sync` called `models.Library.from_api` with incorrect keyword arguments
### Misc
@ -191,7 +127,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Added
- the `--version` option now checks if an update for `audible-cli` is available
- build macOS releases in `onedir` mode
- build macOS releases in onedir mode
### Bugfix


@ -40,14 +40,6 @@ pip install .
```
or, as the best solution, using [pipx](https://pipx.pypa.io/stable/)
```shell
pipx install audible-cli
```
## Standalone executables
If you don't want to install `Python` and `audible-cli` on your machine, you can
@ -57,8 +49,9 @@ page (including beta releases). At this moment Windows, Linux and macOS are supported.
### Links
1. Linux
- [debian 11 onefile](https://github.com/mkb79/audible-cli/releases/latest/download/audible_linux_debian_11.zip)
- [ubuntu latest onefile](https://github.com/mkb79/audible-cli/releases/latest/download/audible_linux_ubuntu_latest.zip)
- [ubuntu 20.04 onefile](https://github.com/mkb79/audible-cli/releases/latest/download/audible_linux_ubuntu_20_04.zip)
- [ubuntu 18.04 onefile](https://github.com/mkb79/audible-cli/releases/latest/download/audible_linux_ubuntu_18_04.zip)
2. macOS
- [macOS latest onefile](https://github.com/mkb79/audible-cli/releases/latest/download/audible_mac.zip)
@ -154,11 +147,7 @@ The APP section supports the following options:
- primary_profile: The profile to use, if no other is specified
- filename_mode: When using the `download` command, a filename mode can be
specified here. If not present, "ascii" will be used as default. To override
this option, you can provide a mode with the `--filename-mode` option of the
download command.
- chapter_type: When using the `download` command, a chapter type can be specified
here. If not present, "tree" will be used as default. To override
this option, you can provide a type with the `--chapter-type` option of the
this option, you can provide a mode with the `filename-mode` option of the
download command.
#### Profile section
@ -166,7 +155,6 @@ The APP section supports the following options:
- auth_file: The auth file for this profile
- country_code: The marketplace for this profile
- filename_mode: See APP section above. Will override the option in APP section.
- chapter_type: See APP section above. Will override the option in APP section.
## Getting started


@ -19,6 +19,7 @@ import typing as t
from enum import Enum
from functools import reduce
from glob import glob
from shlex import quote
from shutil import which
import click
@ -63,7 +64,7 @@ def _get_input_files(
and '*' not in filename
and not SupportedFiles.is_supported_file(filename)
):
raise click.BadParameter("{filename}: file not found or supported.")
raise(click.BadParameter("{filename}: file not found or supported."))
expanded_filter = filter(
lambda x: SupportedFiles.is_supported_file(x), expanded
@ -130,7 +131,7 @@ class ApiChapterInfo:
def count_chapters(self):
return len(self.get_chapters())
def get_chapters(self, separate_intro_outro=False, remove_intro_outro=False):
def get_chapters(self, separate_intro_outro=False):
def extract_chapters(initial, current):
if "chapters" in current:
return initial + [current] + current["chapters"]
@ -147,8 +148,6 @@ class ApiChapterInfo:
if separate_intro_outro:
return self._separate_intro_outro(chapters)
elif remove_intro_outro:
return self._remove_intro_outro(chapters)
return chapters
@ -200,24 +199,6 @@ class ApiChapterInfo:
return chapters
def _remove_intro_outro(self, chapters):
echo("Delete Audible Brand Intro and Outro.")
chapters.sort(key=operator.itemgetter("start_offset_ms"))
intro_dur_ms = self.get_intro_duration_ms()
outro_dur_ms = self.get_outro_duration_ms()
first = chapters[0]
first["length_ms"] -= intro_dur_ms
for chapter in chapters[1:]:
chapter["start_offset_ms"] -= intro_dur_ms
chapter["start_offset_sec"] -= round(chapter["start_offset_ms"] / 1000)
last = chapters[-1]
last["length_ms"] -= outro_dur_ms
return chapters
class FFMeta:
SECTION = re.compile(r"\[(?P<header>[^]]+)\]")
@ -290,23 +271,18 @@ class FFMeta:
def update_chapters_from_chapter_info(
self,
chapter_info: ApiChapterInfo,
force_rebuild_chapters: bool = False,
separate_intro_outro: bool = False,
remove_intro_outro: bool = False
separate_intro_outro: bool = False
) -> None:
if not chapter_info.is_accurate():
echo("Metadata from API is not accurate. Skip.")
return
if chapter_info.count_chapters() != self.count_chapters():
if force_rebuild_chapters:
echo("Force rebuild chapters due to chapter mismatch.")
else:
raise ChapterError("Chapter mismatch")
raise ChapterError("Chapter mismatch")
echo(f"Found {chapter_info.count_chapters()} chapters to prepare.")
echo(f"Found {self.count_chapters()} chapters to prepare.")
api_chapters = chapter_info.get_chapters(separate_intro_outro, remove_intro_outro)
api_chapters = chapter_info.get_chapters(separate_intro_outro)
num_chap = 0
new_chapters = {}
@ -321,20 +297,6 @@ class FFMeta:
"title": chapter["title"],
}
self._ffmeta_parsed["CHAPTER"] = new_chapters
def get_start_end_without_intro_outro(
self,
chapter_info: ApiChapterInfo,
):
intro_dur_ms = chapter_info.get_intro_duration_ms()
outro_dur_ms = chapter_info.get_outro_duration_ms()
total_runtime_ms = chapter_info.get_runtime_length_ms()
start_new = intro_dur_ms
duration_new = total_runtime_ms - intro_dur_ms - outro_dur_ms
return start_new, duration_new
def _get_voucher_filename(file: pathlib.Path) -> pathlib.Path:
@ -359,12 +321,9 @@ class FfmpegFileDecrypter:
target_dir: pathlib.Path,
tempdir: pathlib.Path,
activation_bytes: t.Optional[str],
overwrite: bool,
rebuild_chapters: bool,
force_rebuild_chapters: bool,
skip_rebuild_chapters: bool,
separate_intro_outro: bool,
remove_intro_outro: bool
ignore_missing_chapters: bool,
separate_intro_outro: bool
) -> None:
file_type = SupportedFiles(file.suffix)
@ -384,12 +343,9 @@ class FfmpegFileDecrypter:
self._credentials: t.Optional[t.Union[str, t.Tuple[str]]] = credentials
self._target_dir = target_dir
self._tempdir = tempdir
self._overwrite = overwrite
self._rebuild_chapters = rebuild_chapters
self._force_rebuild_chapters = force_rebuild_chapters
self._skip_rebuild_chapters = skip_rebuild_chapters
self._ignore_missing_chapters = ignore_missing_chapters
self._separate_intro_outro = separate_intro_outro
self._remove_intro_outro = remove_intro_outro
self._api_chapter: t.Optional[ApiChapterInfo] = None
self._ffmeta: t.Optional[FFMeta] = None
self._is_rebuilded: bool = False
@ -421,20 +377,20 @@ class FfmpegFileDecrypter:
key, iv = self._credentials
credentials_cmd = [
"-audible_key",
key,
quote(key),
"-audible_iv",
iv,
quote(iv),
]
else:
credentials_cmd = [
"-activation_bytes",
self._credentials,
quote(self._credentials),
]
base_cmd.extend(credentials_cmd)
extract_cmd = [
"-i",
str(self._source),
quote(str(self._source)),
"-f",
"ffmetadata",
str(metafile),
@ -449,7 +405,7 @@ class FfmpegFileDecrypter:
def rebuild_chapters(self) -> None:
if not self._is_rebuilded:
self.ffmeta.update_chapters_from_chapter_info(
self.api_chapter, self._force_rebuild_chapters, self._separate_intro_outro, self._remove_intro_outro
self.api_chapter, self._separate_intro_outro
)
self._is_rebuilded = True
@ -458,11 +414,8 @@ class FfmpegFileDecrypter:
outfile = self._target_dir / oname
if outfile.exists():
if self._overwrite:
secho(f"Overwrite {outfile}: already exists", fg="blue")
else:
secho(f"Skip {outfile}: already exists", fg="blue")
return
secho(f"Skip {outfile}: already exists", fg="blue")
return
base_cmd = [
"ffmpeg",
@ -470,22 +423,26 @@ class FfmpegFileDecrypter:
"quiet",
"-stats",
]
if self._overwrite:
base_cmd.append("-y")
if isinstance(self._credentials, tuple):
key, iv = self._credentials
credentials_cmd = [
"-audible_key",
key,
quote(key),
"-audible_iv",
iv,
quote(iv),
]
else:
credentials_cmd = [
"-activation_bytes",
self._credentials,
quote(self._credentials),
]
base_cmd.extend(credentials_cmd)
base_cmd.extend(
[
"-i",
quote(str(self._source)),
]
)
if self._rebuild_chapters:
metafile = _get_ffmeta_file(self._source, self._tempdir)
@ -493,56 +450,25 @@ class FfmpegFileDecrypter:
self.rebuild_chapters()
self.ffmeta.write(metafile)
except ChapterError:
if self._skip_rebuild_chapters:
echo("Skip rebuild chapters due to chapter mismatch.")
else:
if not self._ignore_missing_chapters:
raise
else:
if self._remove_intro_outro:
start_new, duration_new = self.ffmeta.get_start_end_without_intro_outro(self.api_chapter)
base_cmd.extend(
[
"-ss",
f"{start_new}ms",
"-t",
f"{duration_new}ms",
"-i",
str(self._source),
"-i",
str(metafile),
"-map_metadata",
"0",
"-map_chapters",
"1",
]
)
else:
base_cmd.extend(
[
"-i",
str(self._source),
"-i",
str(metafile),
"-map_metadata",
"0",
"-map_chapters",
"1",
]
)
else:
base_cmd.extend(
[
"-i",
str(self._source),
]
)
base_cmd.extend(
[
"-i",
quote(str(metafile)),
"-map_metadata",
"0",
"-map_chapters",
"1",
]
)
base_cmd.extend(
[
"-c",
"copy",
str(outfile),
quote(str(outfile)),
]
)
@ -575,25 +501,6 @@ class FfmpegFileDecrypter:
is_flag=True,
help="Rebuild chapters with chapters from voucher or chapter file."
)
@click.option(
"--force-rebuild-chapters",
"-f",
is_flag=True,
help=(
"Force rebuild chapters with chapters from voucher or chapter file "
"if the built-in chapters in the audio file mismatch. "
"Only use with `--rebuild-chapters`."
),
)
@click.option(
"--skip-rebuild-chapters",
"-t",
is_flag=True,
help=(
"Decrypt without rebuilding chapters when chapters mismatch. "
"Only use with `--rebuild-chapters`."
),
)
@click.option(
"--separate-intro-outro",
"-s",
@ -604,11 +511,12 @@ class FfmpegFileDecrypter:
),
)
@click.option(
"--remove-intro-outro",
"-c",
"--ignore-missing-chapters",
"-t",
is_flag=True,
help=(
"Remove Audible Brand Intro and Outro. "
"Decrypt without rebuilding chapters when chapters are not present. "
"Otherwise an item is skipped when this option is not provided. "
"Only use with `--rebuild-chapters`."
),
)
@ -620,10 +528,8 @@ def cli(
all_: bool,
overwrite: bool,
rebuild_chapters: bool,
force_rebuild_chapters: bool,
skip_rebuild_chapters: bool,
separate_intro_outro: bool,
remove_intro_outro: bool,
ignore_missing_chapters: bool
):
"""Decrypt audiobooks downloaded with audible-cli.
@ -637,30 +543,15 @@ def cli(
ctx = click.get_current_context()
ctx.fail("ffmpeg not found")
if (force_rebuild_chapters or skip_rebuild_chapters or separate_intro_outro or remove_intro_outro) and not rebuild_chapters:
if (separate_intro_outro or ignore_missing_chapters) and not rebuild_chapters:
raise click.BadOptionUsage(
"",
"`--force-rebuild-chapters`, `--skip-rebuild-chapters`, `--separate-intro-outro` "
"and `--remove-intro-outro` can only be used together with `--rebuild-chapters`"
)
if force_rebuild_chapters and skip_rebuild_chapters:
raise click.BadOptionUsage(
"",
"`--force-rebuild-chapters` and `--skip-rebuild-chapters` can "
"not be used together"
)
if separate_intro_outro and remove_intro_outro:
raise click.BadOptionUsage(
"",
"`--separate-intro-outro` and `--remove-intro-outro` can not be used together"
"`--separate-intro-outro` and `--ignore-missing-chapters` can "
"only be used together with `--rebuild-chapters`"
)
if all_:
if files:
raise click.BadOptionUsage(
"",
"If using `--all`, no FILES arguments can be used."
)
files = [f"*{suffix}" for suffix in SupportedFiles.get_supported_list()]
@ -673,11 +564,8 @@ def cli(
target_dir=pathlib.Path(directory).resolve(),
tempdir=pathlib.Path(tempdir).resolve(),
activation_bytes=session.auth.activation_bytes,
overwrite=overwrite,
rebuild_chapters=rebuild_chapters,
force_rebuild_chapters=force_rebuild_chapters,
skip_rebuild_chapters=skip_rebuild_chapters,
separate_intro_outro=separate_intro_outro,
remove_intro_outro=remove_intro_outro
ignore_missing_chapters=ignore_missing_chapters,
separate_intro_outro=separate_intro_outro
)
decrypter.run()
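Both variants of the decrypter above assemble essentially the same ffmpeg invocation. A reduced sketch with hypothetical file names and activation bytes (not the plugin's exact helper code); because the arguments are passed to the process as a list, no shell is involved and extra quoting of the values is optional:

```python
# Sketch of the ffmpeg call built above; paths and activation bytes are
# placeholders. "-map_metadata 0" takes global metadata from the first
# input, "-map_chapters 1" takes chapters from the rebuilt ffmetadata file.
import subprocess

cmd = [
    "ffmpeg",
    "-v", "quiet", "-stats",
    "-activation_bytes", "1a2b3c4d",          # hypothetical credentials
    "-i", "book-LC_64_22050_stereo.aax",      # encrypted source
    "-i", "book.meta",                        # rebuilt ffmetadata file
    "-map_metadata", "0",
    "-map_chapters", "1",
    "-c", "copy",
    "book.m4b",                               # decrypted target
]
subprocess.run(cmd, check=True)
```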


@ -1,9 +1,4 @@
import multiprocessing
from audible_cli import cli
multiprocessing.freeze_support()
if __name__ == '__main__':
from audible_cli import cli
cli.main()
cli.main()
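For context on the `freeze_support()` call that the two sides of this hunk place differently: in a PyInstaller-frozen Windows executable that uses `multiprocessing`, each child process re-runs the entry script, and `freeze_support()` is what stops the children from re-entering the CLI. The multiprocessing docs recommend calling it first thing under the `__main__` guard, roughly:

```python
# Sketch of the conventional freeze_support() placement for a frozen
# (PyInstaller) entry point; the call is harmless on non-Windows platforms.
import multiprocessing

from audible_cli import cli

if __name__ == "__main__":
    multiprocessing.freeze_support()
    cli.main()
```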


@ -49,14 +49,13 @@ setup(
"audible>=0.8.2",
"click>=8",
"colorama; platform_system=='Windows'",
"httpx>=0.23.3,<0.28.0",
"httpx>=0.20.0,<0.24.0",
"packaging",
"Pillow",
"tabulate",
"toml",
"tqdm",
"questionary",
"importlib-metadata; python_version<'3.10'",
"questionary"
],
extras_require={
'pyi': [


@ -4,7 +4,6 @@ from typing import Optional, Union
from warnings import warn
import click
from tqdm import tqdm
audible_cli_logger = logging.getLogger("audible_cli")
@ -101,13 +100,10 @@ class ClickHandler(logging.Handler):
try:
msg = self.format(record)
level = record.levelname.lower()
# Avoid tqdm progress bar interruption by logger's output to console
with tqdm.external_write_mode():
if self.echo_kwargs.get(level):
click.echo(msg, **self.echo_kwargs[level])
else:
click.echo(msg)
if self.echo_kwargs.get(level):
click.echo(msg, **self.echo_kwargs[level])
else:
click.echo(msg)
except Exception:
self.handleError(record)
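The `tqdm.external_write_mode()` guard shown in this hunk is a stock tqdm API: it temporarily clears any active progress bars, lets you write to the stream, and then redraws the bars. A minimal self-contained sketch of the effect:

```python
# Without external_write_mode(), the print would tear through the bar;
# with it, log lines and the progress bar coexist cleanly.
import time

from tqdm import tqdm

for i in tqdm(range(5), unit="file"):
    with tqdm.external_write_mode():
        print(f"processing item {i}")  # stands in for click.echo(msg)
    time.sleep(0.2)
```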


@ -1,7 +1,7 @@
__title__ = "audible-cli"
__description__ = "Command line interface (cli) for the audible package."
__url__ = "https://github.com/mkb79/audible-cli"
__version__ = "0.3.2b3"
__version__ = "0.2.5"
__author__ = "mkb79"
__author_email__ = "mkb79@hackitall.de"
__license__ = "AGPL"


@ -1,6 +1,6 @@
import asyncio
import logging
import sys
from pkg_resources import iter_entry_points
import click
@ -17,11 +17,6 @@ from .exceptions import AudibleCliException
from ._logging import click_basic_config
from . import plugins
if sys.version_info >= (3, 10):
from importlib.metadata import entry_points
else: # Python < 3.10 (backport)
from importlib_metadata import entry_points
logger = logging.getLogger("audible_cli")
click_basic_config(logger)
@ -30,7 +25,7 @@ CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
@plugins.from_folder(get_plugin_dir())
@plugins.from_entry_point(entry_points(group=PLUGIN_ENTRY_POINT))
@plugins.from_entry_point(iter_entry_points(PLUGIN_ENTRY_POINT))
@build_in_cmds
@click.group(context_settings=CONTEXT_SETTINGS)
@profile_option
@ -66,9 +61,6 @@ def main(*args, **kwargs):
except click.Abort:
logger.error("Aborted")
sys.exit(1)
except asyncio.CancelledError:
logger.error("Aborted with Asyncio CancelledError")
sys.exit(2)
except AudibleCliException as e:
logger.error(e)
sys.exit(2)
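The version-gated import in this file exists because `importlib.metadata.entry_points()` only accepts the `group` keyword from Python 3.10 on; older interpreters need the `importlib_metadata` backport, otherwise the call raises a `TypeError` (the 0.3.1 bugfix noted in the changelog). A standalone sketch of the same gate, using the real `console_scripts` group as a stand-in for the plugin group:

```python
# Version-gated entry point lookup; on Python < 3.10 the backport package
# importlib-metadata provides the keyword-argument API.
import sys

if sys.version_info >= (3, 10):
    from importlib.metadata import entry_points
else:  # Python < 3.10 (backport)
    from importlib_metadata import entry_points

for ep in entry_points(group="console_scripts"):
    print(ep.name, "->", ep.value)
```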


@ -21,7 +21,6 @@ from ..decorators import (
pass_client,
pass_session
)
from ..downloader import Downloader as NewDownloader, Status
from ..exceptions import (
AudibleCliException,
DirectoryDoesNotExists,
@ -39,8 +38,6 @@ CLIENT_HEADERS = {
"User-Agent": "Audible/671 CFNetwork/1240.0.4 Darwin/20.6.0"
}
QUEUE = None
class DownloadCounter:
def __init__(self):
@ -203,7 +200,7 @@ async def download_pdf(
async def download_chapters(
output_dir, base_filename, item, quality, overwrite_existing, chapter_type
output_dir, base_filename, item, quality, overwrite_existing
):
if not output_dir.is_dir():
raise DirectoryDoesNotExists(output_dir)
@ -217,7 +214,7 @@ async def download_chapters(
return True
try:
metadata = await item.get_content_metadata(quality, chapter_type=chapter_type)
metadata = await item.get_content_metadata(quality)
except NotFoundError:
logger.info(
f"No chapters found for {item.full_title}."
@ -226,7 +223,7 @@ async def download_chapters(
metadata = json.dumps(metadata, indent=4)
async with aiofiles.open(file, "w") as f:
await f.write(metadata)
logger.info(f"Chapter file saved in style '{chapter_type.upper()}' to {file}.")
logger.info(f"Chapter file saved to {file}.")
counter.count_chapter()
@ -258,56 +255,9 @@ async def download_annotations(
counter.count_annotation()
async def _get_audioparts(item):
parts = []
child_library: Library = await item.get_child_items()
if child_library is not None:
for child in child_library:
if (
child.content_delivery_type is not None
and child.content_delivery_type == "AudioPart"
):
parts.append(child)
return parts
async def _add_audioparts_to_queue(
client, output_dir, filename_mode, item, quality, overwrite_existing,
aax_fallback, download_mode
):
parts = await _get_audioparts(item)
if download_mode == "aax":
get_aax = True
get_aaxc = False
else:
get_aax = False
get_aaxc = True
for part in parts:
queue_job(
get_cover=None,
get_pdf=None,
get_annotation=None,
get_chapters=None,
chapter_type=None,
get_aax=get_aax,
get_aaxc=get_aaxc,
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=part,
cover_sizes=None,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=aax_fallback
)
async def download_aax(
client, output_dir, base_filename, item, quality, overwrite_existing,
aax_fallback, filename_mode
aax_fallback
):
# url, codec = await item.get_aax_url(quality)
try:
@ -321,39 +271,20 @@ async def download_aax(
base_filename=base_filename,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
filename_mode=filename_mode
overwrite_existing=overwrite_existing
)
raise
filename = base_filename + f"-{codec}.aax"
filepath = output_dir / filename
dl = NewDownloader(
source=url,
client=client,
expected_types=[
"audio/aax", "audio/vnd.audible.aax", "audio/audible"
]
dl = Downloader(
url, filepath, client, overwrite_existing,
["audio/aax", "audio/vnd.audible.aax", "audio/audible"]
)
downloaded = await dl.run(target=filepath, force_reload=overwrite_existing)
downloaded = await dl.run(pb=True)
if downloaded.status == Status.Success:
if downloaded:
counter.count_aax()
elif downloaded.status == Status.DownloadIndividualParts:
logger.info(
f"Item {filepath} must be downloaded in parts. Adding parts to queue"
)
await _add_audioparts_to_queue(
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
download_mode="aax",
aax_fallback=aax_fallback,
)
async def _reuse_voucher(lr_file, item):
@ -407,8 +338,8 @@ async def _reuse_voucher(lr_file, item):
async def download_aaxc(
client, output_dir, base_filename, item, quality, overwrite_existing,
filename_mode
client, output_dir, base_filename, item,
quality, overwrite_existing
):
lr, url, codec = None, None, None
@ -467,50 +398,39 @@ async def download_aaxc(
logger.info(f"Voucher file saved to {lr_file}.")
counter.count_voucher_saved()
dl = NewDownloader(
source=url,
client=client,
expected_types=[
dl = Downloader(
url,
filepath,
client,
overwrite_existing,
[
"audio/aax", "audio/vnd.audible.aax", "audio/mpeg", "audio/x-m4a",
"audio/audible"
],
]
)
downloaded = await dl.run(target=filepath, force_reload=overwrite_existing)
downloaded = await dl.run(pb=True)
if downloaded.status == Status.Success:
if downloaded:
counter.count_aaxc()
if is_aycl:
counter.count_aycl()
elif downloaded.status == Status.DownloadIndividualParts:
logger.info(
f"Item {filepath} must be downloaded in parts. Adding parts to queue"
)
await _add_audioparts_to_queue(
client=client,
output_dir=output_dir,
filename_mode=filename_mode,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=False,
download_mode="aaxc"
)
async def consume(ignore_errors):
async def consume(queue, ignore_errors):
while True:
cmd, kwargs = await QUEUE.get()
item = await queue.get()
try:
await cmd(**kwargs)
await item
except Exception as e:
logger.error(e)
if not ignore_errors:
raise
finally:
QUEUE.task_done()
queue.task_done()
def queue_job(
queue,
get_cover,
get_pdf,
get_annotation,
@ -522,7 +442,6 @@ def queue_job(
filename_mode,
item,
cover_sizes,
chapter_type,
quality,
overwrite_existing,
aax_fallback
@ -531,76 +450,73 @@ def queue_job(
if get_cover:
for cover_size in cover_sizes:
cmd = download_cover
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"res": cover_size,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_cover(
client=client,
output_dir=output_dir,
base_filename=base_filename,
item=item,
res=cover_size,
overwrite_existing=overwrite_existing
)
)
if get_pdf:
cmd = download_pdf
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_pdf(
client=client,
output_dir=output_dir,
base_filename=base_filename,
item=item,
overwrite_existing=overwrite_existing
)
)
if get_chapters:
cmd = download_chapters
kwargs = {
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing,
"chapter_type": chapter_type
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_chapters(
output_dir=output_dir,
base_filename=base_filename,
item=item,
quality=quality,
overwrite_existing=overwrite_existing
)
)
if get_annotation:
cmd = download_annotations
kwargs = {
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"overwrite_existing": overwrite_existing
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_annotations(
output_dir=output_dir,
base_filename=base_filename,
item=item,
overwrite_existing=overwrite_existing
)
)
if get_aax:
cmd = download_aax
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing,
"aax_fallback": aax_fallback,
"filename_mode": filename_mode
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_aax(
client=client,
output_dir=output_dir,
base_filename=base_filename,
item=item,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=aax_fallback
)
)
if get_aaxc:
cmd = download_aaxc
kwargs = {
"client": client,
"output_dir": output_dir,
"base_filename": base_filename,
"item": item,
"quality": quality,
"overwrite_existing": overwrite_existing,
"filename_mode": filename_mode
}
QUEUE.put_nowait((cmd, kwargs))
queue.put_nowait(
download_aaxc(
client=client,
output_dir=output_dir,
base_filename=base_filename,
item=item,
quality=quality,
overwrite_existing=overwrite_existing
)
)
def display_counter():
@ -689,13 +605,7 @@ def display_counter():
@click.option(
"--chapter",
is_flag=True,
help="Saves chapter metadata as JSON file."
)
@click.option(
"--chapter-type",
default="config",
type=click.Choice(["Flat", "Tree", "config"], case_sensitive=False),
help="The chapter type."
help="saves chapter metadata as JSON file"
)
@click.option(
"--annotation",
@ -758,10 +668,8 @@ async def cli(session, api_client, **params):
asins = params.get("asin")
titles = params.get("title")
if get_all and (asins or titles):
raise click.BadOptionUsage(
"--all",
"`--all` can not be used together with `--asin` or `--title`"
)
logger.error(f"Do not mix *asin* or *title* option with *all* option.")
click.Abort()
# what to download
get_aax = params.get("aax")
@ -782,10 +690,8 @@ async def cli(session, api_client, **params):
if not any(
[get_aax, get_aaxc, get_annotation, get_chapters, get_cover, get_pdf]
):
raise click.BadOptionUsage(
"",
"Please select an option what you want download."
)
logger.error("Please select an option what you want download.")
raise click.Abort()
# additional options
sim_jobs = params.get("jobs")
@ -794,22 +700,15 @@ async def cli(session, api_client, **params):
overwrite_existing = params.get("overwrite")
ignore_errors = params.get("ignore_errors")
no_confirm = params.get("no_confirm")
resolve_podcasts = params.get("resolve_podcasts")
resolve_podcats = params.get("resolve_podcasts")
ignore_podcasts = params.get("ignore_podcasts")
if all([resolve_podcasts, ignore_podcasts]):
raise click.BadOptionUsage(
"",
"Do not mix *ignore-podcasts* with *resolve-podcasts* option."
)
bunch_size = session.params.get("bunch_size")
start_date = session.params.get("start_date")
end_date = session.params.get("end_date")
if all([start_date, end_date]) and start_date > end_date:
raise click.BadOptionUsage(
"",
"start date must be before or equal the end date"
)
logger.error("start date must be before or equal the end date")
raise click.Abort()
if start_date is not None:
logger.info(
@ -820,11 +719,6 @@ async def cli(session, api_client, **params):
f"Selected end date: {end_date.strftime('%Y-%m-%dT%H:%M:%S.%fZ')}"
)
chapter_type = params.get("chapter_type")
if chapter_type == "config":
chapter_type = session.config.get_profile_option(
session.selected_profile, "chapter_type") or "Tree"
filename_mode = params.get("filename_mode")
if filename_mode == "config":
filename_mode = session.config.get_profile_option(
@ -844,9 +738,8 @@ async def cli(session, api_client, **params):
status="Active",
)
if resolve_podcasts:
await library.resolve_podcasts(start_date=start_date, end_date=end_date)
[library.data.remove(i) for i in library if i.is_parent_podcast()]
if resolve_podcats:
await library.resolve_podcats(start_date=start_date, end_date=end_date)
# collect jobs
jobs = []
@ -863,7 +756,7 @@ async def cli(session, api_client, **params):
else:
if not ignore_errors:
logger.error(f"Asin {asin} not found in library.")
raise click.Abort()
click.Abort()
logger.error(
f"Skip asin {asin}: Not found in library"
)
@ -895,19 +788,13 @@ async def cli(session, api_client, **params):
f"Skip title {title}: Not found in library"
)
# set queue
global QUEUE
QUEUE = asyncio.Queue()
queue = asyncio.Queue()
for job in jobs:
item = library.get_item_by_asin(job)
items = [item]
odir = pathlib.Path(output_dir)
if item.is_parent_podcast():
if ignore_podcasts:
continue
if not ignore_podcasts and item.is_parent_podcast():
items.remove(item)
if item._children is None:
await item.get_child_items(
@ -925,6 +812,7 @@ async def cli(session, api_client, **params):
for item in items:
queue_job(
queue=queue,
get_cover=get_cover,
get_pdf=get_pdf,
get_annotation=get_annotation,
@ -936,19 +824,19 @@ async def cli(session, api_client, **params):
filename_mode=filename_mode,
item=item,
cover_sizes=cover_sizes,
chapter_type=chapter_type,
quality=quality,
overwrite_existing=overwrite_existing,
aax_fallback=aax_fallback
)
# schedule the consumer
consumers = [
asyncio.ensure_future(consume(ignore_errors)) for _ in range(sim_jobs)
]
try:
# schedule the consumer
consumers = [
asyncio.ensure_future(consume(queue, ignore_errors)) for _ in range(sim_jobs)
]
# wait until the consumer has processed all items
await QUEUE.join()
await queue.join()
finally:
# the consumer is still awaiting an item, cancel it
for consumer in consumers:
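The hunk above is cut off by the diff view right at the consumer cleanup. A reduced, runnable sketch of the producer/consumer pattern both sides implement, including the final cancellation step; the job payloads are dummies:

```python
# Jobs are enqueued as (coroutine function, kwargs) pairs; a fixed number
# of consumers drain the queue, queue.join() waits for completion, and the
# still-waiting consumers are cancelled afterwards.
import asyncio

async def consume(queue):
    while True:
        cmd, kwargs = await queue.get()
        try:
            await cmd(**kwargs)
        finally:
            queue.task_done()

async def fake_job(name):
    await asyncio.sleep(0.1)
    print(f"done: {name}")

async def main(sim_jobs=3):
    queue = asyncio.Queue()
    for i in range(10):
        queue.put_nowait((fake_job, {"name": f"item-{i}"}))
    consumers = [asyncio.ensure_future(consume(queue)) for _ in range(sim_jobs)]
    try:
        await queue.join()
    finally:
        for consumer in consumers:
            consumer.cancel()

asyncio.run(main())
```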


@ -45,7 +45,7 @@ async def _get_library(session, client, resolve_podcasts):
)
if resolve_podcasts:
await library.resolve_podcasts(start_date=start_date, end_date=end_date)
await library.resolve_podcats(start_date=start_date, end_date=end_date)
return library


@ -160,7 +160,7 @@ def check_if_auth_file_not_exists(session, ctx, param, value):
)
@click.option(
"--country-code", "-cc",
type=click.Choice(AVAILABLE_MARKETPLACES),
type=click.Choice(["us", "ca", "uk", "au", "fr", "de", "jp", "it", "in"]),
prompt="Please enter the country code",
help="The country code for the marketplace you want to authenticate."
)


@ -95,7 +95,7 @@ def version_option(func=None, **kwargs):
response.raise_for_status()
except Exception as e:
logger.error(e)
raise click.Abort()
click.Abort()
content = response.json()
@ -201,7 +201,7 @@ def timeout_option(func=None, **kwargs):
return value
kwargs.setdefault("type", click.INT)
kwargs.setdefault("default", 30)
kwargs.setdefault("default", 10)
kwargs.setdefault("show_default", True)
kwargs.setdefault(
"help", ("Increase the timeout time if you got any TimeoutErrors. "


@ -1,563 +0,0 @@
import logging
import pathlib
import re
from enum import Enum, auto
from typing import Any, Callable, Dict, List, NamedTuple, Optional, Union
import aiofiles
import click
import httpx
import tqdm
from aiofiles.os import path, unlink
try:
from typing import Literal
except ImportError:
from typing_extensions import Literal
FileMode = Literal["ab", "wb"]
logger = logging.getLogger("audible_cli.downloader")
ACCEPT_RANGES_HEADER = "Accept-Ranges"
ACCEPT_RANGES_NONE_VALUE = "none"
CONTENT_LENGTH_HEADER = "Content-Length"
CONTENT_TYPE_HEADER = "Content-Type"
MAX_FILE_READ_SIZE = 3 * 1024 * 1024
ETAG_HEADER = "ETag"
class ETag:
def __init__(self, etag: str) -> None:
self._etag = etag
@property
def value(self) -> str:
return self._etag
@property
def parsed_etag(self) -> str:
return re.search('"([^"]*)"', self.value).group(1)
@property
def is_weak(self) -> bool:
return bool(re.search("^W/", self.value))
class File:
def __init__(self, file: Union[pathlib.Path, str]) -> None:
if not isinstance(file, pathlib.Path):
file = pathlib.Path(file)
self._file = file
@property
def path(self) -> pathlib.Path:
return self._file
async def get_size(self) -> int:
if await path.isfile(self.path):
return await path.getsize(self.path)
return 0
async def remove(self) -> None:
if await path.isfile(self.path):
await unlink(self.path)
async def directory_exists(self) -> bool:
return await path.isdir(self.path.parent)
async def is_file(self) -> bool:
return await path.isfile(self.path) and not await self.is_link()
async def is_link(self) -> bool:
return await path.islink(self.path)
async def exists(self) -> bool:
return await path.exists(self.path)
async def read_text_content(
self, max_bytes: int = MAX_FILE_READ_SIZE, encoding: str = "utf-8", errors=None
) -> str:
file_size = await self.get_size()
read_size = min(max_bytes, file_size)
try:
async with aiofiles.open(
file=self.path, mode="r", encoding=encoding, errors=errors
) as file:
return await file.read(read_size)
except Exception: # noqa
return "Unknown"
class ResponseInfo:
def __init__(self, response: httpx.Response) -> None:
self._response = response
self.headers: httpx.Headers = response.headers
self.status_code: int = response.status_code
self.content_length: Optional[int] = self._get_content_length(self.headers)
self.content_type: Optional[str] = self._get_content_type(self.headers)
self.accept_ranges: bool = self._does_accept_ranges(self.headers)
self.etag: Optional[ETag] = self._get_etag(self.headers)
@property
def response(self) -> httpx.Response:
return self._response
def supports_resume(self) -> bool:
return bool(self.accept_ranges)
@staticmethod
def _does_accept_ranges(headers: httpx.Headers) -> bool:
# 'Accept-Ranges' indicates if the source accepts range requests,
# that let you retrieve a part of the response
accept_ranges_value = headers.get(
ACCEPT_RANGES_HEADER, ACCEPT_RANGES_NONE_VALUE
)
does_accept_ranges = accept_ranges_value != ACCEPT_RANGES_NONE_VALUE
return does_accept_ranges
@staticmethod
def _get_content_length(headers: httpx.Headers) -> Optional[int]:
content_length = headers.get(CONTENT_LENGTH_HEADER)
if content_length is not None:
return int(content_length)
return content_length
@staticmethod
def _get_content_type(headers: httpx.Headers) -> Optional[str]:
return headers.get(CONTENT_TYPE_HEADER)
@staticmethod
def _get_etag(headers: httpx.Headers) -> Optional[ETag]:
etag_header = headers.get(ETAG_HEADER)
if etag_header is None:
return etag_header
return ETag(etag_header)
class Status(Enum):
Success = auto()
DestinationAlreadyExists = auto()
DestinationFolderNotExists = auto()
DestinationNotAFile = auto()
DownloadError = auto()
DownloadErrorStatusCode = auto()
DownloadSizeMismatch = auto()
DownloadContentTypeMismatch = auto()
DownloadIndividualParts = auto()
SourceDoesNotSupportResume = auto()
StatusCode = auto()
async def check_target_file_status(
target_file: File, force_reload: bool, **kwargs: Any
) -> Status:
if not await target_file.directory_exists():
logger.error(
f"Folder {target_file.path} does not exists! Skip download."
)
return Status.DestinationFolderNotExists
if await target_file.exists() and not await target_file.is_file():
logger.error(
f"Object {target_file.path} exists but is not a file. Skip download."
)
return Status.DestinationNotAFile
if await target_file.is_file() and not force_reload:
logger.info(
f"File {target_file.path} already exists. Skip download."
)
return Status.DestinationAlreadyExists
return Status.Success
async def check_download_size(
tmp_file: File, target_file: File, head_response: ResponseInfo, **kwargs: Any
) -> Status:
tmp_file_size = await tmp_file.get_size()
content_length = head_response.content_length
if tmp_file_size is not None and content_length is not None:
if tmp_file_size != content_length:
logger.error(
f"Error downloading {target_file.path}. File size missmatch. "
f"Expected size: {content_length}; Downloaded: {tmp_file_size}"
)
return Status.DownloadSizeMismatch
return Status.Success
async def check_status_code(
response: ResponseInfo, tmp_file: File, target_file: File, **kwargs: Any
) -> Status:
if not 200 <= response.status_code < 400:
content = await tmp_file.read_text_content()
logger.error(
f"Error downloading {target_file.path}. Message: {content}"
)
return Status.StatusCode
return Status.Success
async def check_content_type(
response: ResponseInfo, target_file: File, tmp_file: File,
expected_types: List[str], **kwargs: Any
) -> Status:
if not expected_types:
return Status.Success
if response.content_type not in expected_types:
content = await tmp_file.read_text_content()
logger.error(
f"Error downloading {target_file.path}. Wrong content type. "
f"Expected type(s): {expected_types}; "
f"Got: {response.content_type}; Message: {content}"
)
return Status.DownloadContentTypeMismatch
return Status.Success
def _status_for_message(message: str) -> Status:
if "please download individual parts" in message:
return Status.DownloadIndividualParts
return Status.Success
async def check_status_for_message(
response: ResponseInfo, tmp_file: File, **kwargs: Any
) -> Status:
if response.content_type and "text" in response.content_type:
length = response.content_length or await tmp_file.get_size()
if length <= MAX_FILE_READ_SIZE:
message = await tmp_file.read_text_content()
return _status_for_message(message)
return Status.Success
class DownloadResult(NamedTuple):
status: Status
destination: File
head_response: Optional[ResponseInfo]
response: Optional[ResponseInfo]
message: Optional[str]
class DummyProgressBar:
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
def update(self, *args, **kwargs):
pass
def get_progressbar(
destination: pathlib.Path, total: Optional[int], start: int = 0
) -> Union[tqdm.tqdm, DummyProgressBar]:
if total is None:
return DummyProgressBar()
description = click.format_filename(destination, shorten=True)
progressbar = tqdm.tqdm(
desc=description,
total=total,
unit="B",
unit_scale=True,
unit_divisor=1024
)
if start > 0:
progressbar.update(start)
return progressbar
class Downloader:
MIN_STREAM_LENGTH = 10*1024*1024 # using stream mode if source is greater than
MIN_RESUME_FILE_LENGTH = 10*1024*1024 # keep resume file if file is greater than
RESUME_SUFFIX = ".resume"
TMP_SUFFIX = ".tmp"
def __init__(
self,
source: httpx.URL,
client: httpx.AsyncClient,
expected_types: Optional[Union[List[str], str]] = None,
additional_headers: Optional[Dict[str, str]] = None
) -> None:
self._source = source
self._client = client
self._expected_types = self._normalize_expected_types(expected_types)
self._additional_headers = self._normalize_headers(additional_headers)
self._head_request: Optional[ResponseInfo] = None
@staticmethod
def _normalize_expected_types(
expected_types: Optional[Union[List[str], str]]
) -> List[str]:
if not isinstance(expected_types, list):
if expected_types is None:
expected_types = []
else:
expected_types = [expected_types]
return expected_types
@staticmethod
def _normalize_headers(headers: Optional[Dict[str, str]]) -> Dict[str, str]:
if headers is None:
return {}
return headers
async def get_head_response(self, force_recreate: bool = False) -> ResponseInfo:
if self._head_request is None or force_recreate:
# switched from HEAD to GET request without loading the body
# HEAD request to cds.audible.de will responded in 1 - 2 minutes
# a GET request to the same URI will take ~4-6 seconds
async with self._client.stream(
"GET", self._source, headers=self._additional_headers,
follow_redirects=True,
) as head_response:
if head_response.request.url != self._source:
self._source = head_response.request.url
self._head_request = ResponseInfo(head_response)
return self._head_request
async def _determine_resume_file(self, target_file: File) -> File:
head_response = await self.get_head_response()
etag = head_response.etag
if etag is None:
resume_name = target_file.path
else:
parsed_etag = etag.parsed_etag
resume_name = target_file.path.with_name(parsed_etag)
resume_file = resume_name.with_suffix(self.RESUME_SUFFIX)
return File(resume_file)
def _determine_tmp_file(self, target_file: File) -> File:
tmp_file = pathlib.Path(target_file.path).with_suffix(self.TMP_SUFFIX)
return File(tmp_file)
async def _handle_tmp_file(
self, tmp_file: File, supports_resume: bool, response: ResponseInfo
) -> None:
tmp_file_size = await tmp_file.get_size()
expected_size = response.content_length
if (
supports_resume and expected_size is not None
and self.MIN_RESUME_FILE_LENGTH < tmp_file_size < expected_size
):
logger.debug(f"Keep resume file {tmp_file.path}")
else:
await tmp_file.remove()
@staticmethod
async def _rename_file(
tmp_file: File, target_file: File, force_reload: bool, response: ResponseInfo
) -> Status:
target_path = target_file.path
if await target_file.exists() and force_reload:
i = 0
while target_path.with_suffix(f"{target_path.suffix}.old.{i}").exists():
i += 1
target_path.rename(target_path.with_suffix(f"{target_path.suffix}.old.{i}"))
tmp_file.path.rename(target_path)
logger.info(
f"File {target_path} downloaded in {response.response.elapsed}."
)
return Status.Success
@staticmethod
async def _check_and_return_download_result(
status_check_func: Callable,
tmp_file: File,
target_file: File,
response: ResponseInfo,
head_response: ResponseInfo,
expected_types: List[str]
) -> Optional[DownloadResult]:
status = await status_check_func(
response=response,
tmp_file=tmp_file,
target_file=target_file,
expected_types=expected_types
)
if status != Status.Success:
message = await tmp_file.read_text_content()
return DownloadResult(
status=status,
destination=target_file,
head_response=head_response,
response=response,
message=message
)
return None
async def _postprocessing(
self, tmp_file: File, target_file: File, response: ResponseInfo,
force_reload: bool
) -> DownloadResult:
head_response = await self.get_head_response()
status_checks = [
check_status_for_message,
check_status_code,
check_content_type
]
for check in status_checks:
result = await self._check_and_return_download_result(
check, tmp_file, target_file, response,
head_response, self._expected_types
)
if result:
return result
await self._rename_file(
tmp_file=tmp_file,
target_file=target_file,
force_reload=force_reload,
response=response,
)
return DownloadResult(
status=Status.Success,
destination=target_file,
head_response=head_response,
response=response,
message=None
)
async def _stream_download(
self,
tmp_file: File,
target_file: File,
start: int,
progressbar: Union[tqdm.tqdm, DummyProgressBar],
force_reload: bool = True
) -> DownloadResult:
headers = self._additional_headers.copy()
if start > 0:
headers.update(Range=f"bytes={start}-")
file_mode: FileMode = "ab"
else:
file_mode: FileMode = "wb"
async with self._client.stream(
method="GET", url=self._source, follow_redirects=True, headers=headers
) as response:
with progressbar:
async with aiofiles.open(tmp_file.path, mode=file_mode) as file:
async for chunk in response.aiter_bytes():
await file.write(chunk)
progressbar.update(len(chunk))
return await self._postprocessing(
tmp_file=tmp_file,
target_file=target_file,
response=ResponseInfo(response=response),
force_reload=force_reload
)
async def _download(
self, tmp_file: File, target_file: File, start: int, force_reload: bool
) -> DownloadResult:
headers = self._additional_headers.copy()
if start > 0:
headers.update(Range=f"bytes={start}-")
file_mode: FileMode = "ab"
else:
file_mode: FileMode = "wb"
response = await self._client.get(
self._source, follow_redirects=True, headers=headers
)
async with aiofiles.open(tmp_file.path, mode=file_mode) as file:
await file.write(response.content)
return await self._postprocessing(
tmp_file=tmp_file,
target_file=target_file,
response=ResponseInfo(response=response),
force_reload=force_reload
)
async def run(
self,
target: pathlib.Path,
force_reload: bool = False
) -> DownloadResult:
target_file = File(target)
destination_status = await check_target_file_status(
target_file, force_reload
)
if destination_status != Status.Success:
return DownloadResult(
status=destination_status,
destination=target_file,
head_response=None,
response=None,
message=None
)
head_response = await self.get_head_response()
supports_resume = head_response.supports_resume()
if supports_resume:
tmp_file = await self._determine_resume_file(target_file=target_file)
start = await tmp_file.get_size()
else:
tmp_file = self._determine_tmp_file(target_file=target_file)
await tmp_file.remove()
start = 0
should_stream = False
progressbar = None
if (
head_response.content_length is not None and
head_response.content_length >= self.MIN_STREAM_LENGTH
):
should_stream = True
progressbar = get_progressbar(
target_file.path, head_response.content_length, start
)
try:
if should_stream:
return await self._stream_download(
tmp_file=tmp_file,
target_file=target_file,
start=start,
progressbar=progressbar,
force_reload=force_reload
)
else:
return await self._download(
tmp_file=tmp_file,
target_file=target_file,
start=start,
force_reload=force_reload
)
finally:
await self._handle_tmp_file(
tmp_file=tmp_file,
supports_resume=supports_resume,
response=head_response
)
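A usage sketch for the module above; the URL is a placeholder and `Downloader`/`Status` are the names defined in this file. `run()` returns a `DownloadResult`, so callers branch on its `status` instead of catching exceptions:

```python
# Hypothetical caller of the Downloader defined above.
import asyncio
import pathlib

import httpx

async def main():
    async with httpx.AsyncClient() as client:
        dl = Downloader(
            source=httpx.URL("https://example.com/file.aax"),  # placeholder
            client=client,
            expected_types=["audio/aax", "audio/vnd.audible.aax"],
        )
        result = await dl.run(target=pathlib.Path("file.aax"), force_reload=False)
        if result.status == Status.Success:
            print(f"saved to {result.destination.path}")
        elif result.status == Status.DownloadIndividualParts:
            print("source must be fetched part by part")

asyncio.run(main())
```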


@ -6,7 +6,6 @@ import unicodedata
from datetime import datetime
from math import ceil
from typing import List, Optional, Union
from warnings import warn
import audible
import httpx
@ -132,17 +131,9 @@ class BaseItem:
return True
def is_published(self):
if (
self.content_delivery_type and self.content_delivery_type == "AudioPart"
and self._parent
):
publication_datetime = self._parent.publication_datetime
else:
publication_datetime = self.publication_datetime
if publication_datetime is not None:
if self.publication_datetime is not None:
pub_date = datetime.strptime(
publication_datetime, "%Y-%m-%dT%H:%M:%SZ"
self.publication_datetime, "%Y-%m-%dT%H:%M:%SZ"
)
now = datetime.utcnow()
return now > pub_date
@ -392,21 +383,15 @@ class LibraryItem(BaseItem):
return lr
async def get_content_metadata(
self, quality: str = "high", chapter_type: str = "Tree", **request_kwargs
):
chapter_type = chapter_type.capitalize()
async def get_content_metadata(self, quality: str = "high"):
assert quality in ("best", "high", "normal",)
assert chapter_type in ("Flat", "Tree")
url = f"content/{self.asin}/metadata"
params = {
"response_groups": "last_position_heard, content_reference, "
"chapter_info",
"quality": "High" if quality in ("best", "high") else "Normal",
"drm_type": "Adrm",
"chapter_titles_type": chapter_type,
**request_kwargs
"drm_type": "Adrm"
}
metadata = await self._client.get(url, params=params)
@ -604,18 +589,6 @@ class Library(BaseList):
self,
start_date: Optional[datetime] = None,
end_date: Optional[datetime] = None
):
warn(
"resolve_podcats is deprecated, use resolve_podcasts instead",
DeprecationWarning,
stacklevel=2
)
return self.resolve_podcasts(start_date, end_date)
async def resolve_podcasts(
self,
start_date: Optional[datetime] = None,
end_date: Optional[datetime] = None
):
podcast_items = await asyncio.gather(
*[i.get_child_items(start_date=start_date, end_date=end_date)
@ -681,7 +654,7 @@ class Catalog(BaseList):
return cls(resp, api_client=api_client)
async def resolve_podcasts(self):
async def resolve_podcats(self):
podcast_items = await asyncio.gather(
*[i.get_child_items() for i in self if i.is_parent_podcast()]
)


@ -28,49 +28,39 @@ def from_folder(plugin_dir: Union[str, pathlib.Path]):
"""
def decorator(group):
if not isinstance(group, click.Group):
raise TypeError(
"Plugins can only be attached to an instance of click.Group()"
)
raise TypeError("Plugins can only be attached to an instance of "
"click.Group()")
plugin_path = pathlib.Path(plugin_dir).resolve()
sys.path.insert(0, str(plugin_path))
pdir = pathlib.Path(plugin_dir)
cmds = [x for x in pdir.glob("cmd_*.py")]
sys.path.insert(0, str(pdir.resolve()))
for cmd_path in plugin_path.glob("cmd_*.py"):
cmd_path_stem = cmd_path.stem
for cmd in cmds:
mod_name = cmd.stem
try:
mod = import_module(cmd_path_stem)
cmd = mod.cli
if cmd.name == "cli":
# if no name given to the command, use the filename
# excl. starting cmd_ as name
cmd.name = cmd_path_stem[4:]
group.add_command(cmd)
orig_help = cmd.help or ""
new_help = (
f"(P) {orig_help}\n\nPlugin loaded from file: {str(cmd_path)}"
)
cmd.help = new_help
mod = import_module(mod_name)
name = mod_name[4:] if mod.cli.name == "cli" else mod.cli.name
group.add_command(mod.cli, name=name)
except Exception: # noqa
# Catch this so a busted plugin doesn't take down the CLI.
# Handled by registering a dummy command that does nothing
# other than explain the error.
group.add_command(BrokenCommand(cmd_path_stem[4:]))
group.add_command(BrokenCommand(mod_name[4:]))
return group
return decorator
def from_entry_point(entry_point_group):
def from_entry_point(entry_point_group: str):
"""
A decorator to register external CLI commands to an instance of
`click.Group()`.
Parameters
----------
entry_point_group : list
A list producing one `pkg_resources.EntryPoint()` per iteration.
entry_point_group : iter
An iterable producing one `pkg_resources.EntryPoint()` per iteration.
Returns
-------
@ -78,23 +68,13 @@ def from_entry_point(entry_point_group):
"""
def decorator(group):
if not isinstance(group, click.Group):
raise TypeError(
"Plugins can only be attached to an instance of click.Group()"
)
print(type(group))
raise TypeError("Plugins can only be attached to an instance of "
"click.Group()")
for entry_point in entry_point_group or ():
try:
cmd = entry_point.load()
dist_name = entry_point.dist.name
if cmd.name == "cli":
# if no name given to the command, use the filename
# excl. starting cmd_ as name
cmd.name = dist_name
group.add_command(cmd)
orig_help = cmd.help or ""
new_help = f"(P) {orig_help}\n\nPlugin loaded from package: {dist_name}"
cmd.help = new_help
group.add_command(entry_point.load())
except Exception: # noqa
# Catch this so a busted plugin doesn't take down the CLI.
# Handled by registering a dummy command that does nothing
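To round off the folder loader above: a minimal plugin file it would pick up. The filename and option are hypothetical; because the command keeps the default name `cli`, the loader exposes it under the filename minus the `cmd_` prefix, i.e. as `hello`:

```python
# cmd_hello.py, placed in the plugin folder passed to from_folder().
import click

@click.command()
@click.option("--name", default="world")
def cli(name):
    """Say hello from a plugin folder command."""
    click.echo(f"Hello, {name}!")
```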