Compare commits

...

34 Commits

SHA1 Message Date
742632ed8d Bump version to 3.2.0 2021-08-04 18:27:26 +00:00
544d45cbc5 Catch non-critical exceptions at crawler top level 2021-07-13 15:42:11 +02:00
86f79ff1f1 Update changelog 2021-07-07 15:23:58 +02:00
ee67f9f472 Sort elements by ILIAS id to ensure deterministic ordering 2021-07-06 17:45:48 +02:00
8ec3f41251 Crawl ilias booking objects as links 2021-07-06 16:15:25 +02:00
89be07d4d3 Use final crawl path in HTML parsing message 2021-07-03 17:05:48 +02:00
91200f3684 Fix nondeterministic name deduplication 2021-07-03 12:09:55 +02:00
9ffd603357 Error when using multiple segments with -name->
Previously, PFERD just silently never matched the -name-> arrow. Now, it errors
when loading the config file.
2021-07-01 11:14:50 +02:00
80eeb8fe97 Add --skip option 2021-07-01 11:02:21 +02:00
75fde870c2 Bump version to 3.1.0 2021-06-13 17:23:18 +02:00
6e4d423c81 Crawl all video stages in one crawl bar
This ensures folders are not renamed, as they are crawled twice
2021-06-13 17:18:45 +02:00
57aef26217 Fix name arrows
I seem to have (re-)implemented them incorrectly and never tested them.
2021-06-13 16:33:29 +02:00
70ec64a48b Fix wrong base URL for multi-stage pages 2021-06-13 15:44:47 +02:00
70b33ecfd9 Add migration notes to changelog
Also clean up some other formatting for consistency
2021-06-13 15:06:50 +02:00
601e4b936b Use new arrow logic in README example config 2021-06-12 15:00:52 +02:00
a292c4c437 Add example for ">>" arrow heads 2021-06-12 14:57:29 +02:00
bc65ea7ab6 Fix mypy complaining about missing type hints 2021-06-09 22:45:52 +02:00
f28bbe6b0c Update transform rule documentation
It's still missing an example that uses rules with ">>" arrows.
2021-06-09 22:45:52 +02:00
61d902d715 Overhaul transform logic
-re-> arrows now rename their parent directories (like -->) and don't require a
full match (like -exact->). Their old behaviour is available as -exact-re->.

Also, this change adds the ">>" arrow head, which modifies the current path and
continues to the next rule when it matches.
2021-06-09 22:45:52 +02:00
8ab462fb87 Use the exercise label instead of the button name as path 2021-06-04 19:24:23 +02:00
df3ad3d890 Add 'skip' option to crawlers 2021-06-04 18:47:13 +02:00
fc31100a0f Always use '/' as path separator for regex rules
Previously, regex-matching paths on windows would, in some cases, require four
backslashes ('\\\\') to escape a single path separator. That's just too much.

With this commit, regex transforms now use '/' instead of '\' as path separator,
meaning rules can more easily be shared between platforms (although they are not
guaranteed to be 100% compatible since on Windows, '\' is still recognized as a
path separator).

To make rules more intuitive to write, local relative paths are now also printed
with '/' as path separator on Windows. Since Windows also accepts '/' as path
separator, this change doesn't really affect other rules that parse their sides
as paths.
2021-06-04 18:12:45 +02:00
31b6311e99 Remove incorrect tmp file explain message 2021-06-01 19:03:06 +02:00
1fc8e9eb7a Document credential file authenticator config options 2021-06-01 10:01:14 +00:00
85b9f45085 Bump version to 3.0.1 2021-06-01 09:49:30 +00:00
f656e3ff34 Fix credential parsing 2021-06-01 09:18:17 +00:00
e1bda94329 Load credential file from correct path 2021-06-01 09:18:08 +00:00
f6b26f4ead Fix unexpected exception when credential file not found 2021-06-01 09:10:58 +00:00
722970a255 Store cookies in text-based format
Using the stdlib's http.cookie module, cookies are now stored as one
"Set-Cookie" header per line. Previously, the aiohttp.CookieJar's save() and
load() methods were used (which use pickling).
2021-05-31 20:18:20 +00:00
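The described format can be sketched with the stdlib module alone (cookie values are illustrative, not PFERD's exact code):

```python
import http.cookies

# Serialize: one "Set-Cookie" header per line
jar = http.cookies.SimpleCookie()
jar.load("session=abc123; Path=/; HttpOnly")
text = jar.output(sep="\n")  # e.g. 'Set-Cookie: session=abc123; HttpOnly; Path=/'

# Parse it back, skipping anything that isn't a Set-Cookie line
restored = http.cookies.SimpleCookie()
for line in text.splitlines():
    if line[:11].lower() == "set-cookie:":  # header names are case insensitive
        restored.load(line[11:])
print(restored["session"].value)  # -> abc123
```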
f40820c41f Warn if using concurrent tasks with kit-ilias-web 2021-05-31 20:18:20 +00:00
49ad1b6e46 Clean up authenticator code formatting 2021-05-31 18:45:06 +02:00
1ce32d2f18 Add CLI option for credential file auth to kit-ilias-web 2021-05-31 18:45:06 +02:00
9d5ec84b91 Add credential file authenticator 2021-05-31 18:33:34 +02:00
1fba96abcb Fix exercise date parsing for non-group submissions
ILIAS apparently changes the order of the fields as it sees fit, so we
now try to parse *every* column, starting from the right, as a date.
The first column that parses successfully is then used.
2021-05-31 18:15:12 +02:00
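A minimal sketch of that strategy (the date format shown is an assumption; PFERD's real parser handles several ILIAS-specific formats):

```python
from datetime import datetime
from typing import Optional, Sequence

def first_date_from_right(columns: Sequence[str]) -> Optional[datetime]:
    # Try to parse each column as a date, starting from the right,
    # and use the first one that parses successfully.
    for text in reversed(columns):
        try:
            return datetime.strptime(text.strip(), "%d. %b %Y, %H:%M")
        except ValueError:
            continue
    return None

print(first_date_from_right(["Homework 1", "Alice", "31. May 2021, 18:15"]))
```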
24 changed files with 867 additions and 425 deletions

CHANGELOG.md

@@ -22,6 +22,56 @@ ambiguous situations.
## Unreleased
+## 3.2.0 - 2021-08-04
+### Added
+- `--skip` command line option
+- Support for ILIAS booking objects
+### Changed
+- Using multiple path segments on left side of `-name->` now results in an
+  error. This was already forbidden by the documentation but silently accepted
+  by PFERD.
+- More consistent path printing in some `--explain` messages
+### Fixed
+- Nondeterministic name deduplication due to ILIAS reordering elements
+- More exceptions are handled properly
+## 3.1.0 - 2021-06-13
+If your config file doesn't do weird things with transforms, it should continue
+to work. If your `-re->` arrows behave weirdly, try replacing them with
+`-exact-re->` arrows. If you're on Windows, you might need to switch from `\`
+path separators to `/` in your regex rules.
+### Added
+- `skip` option for crawlers
+- Rules with `>>` instead of `>` as arrow head
+- `-exact-re->` arrow (behaves like `-re->` did previously)
+### Changed
+- The `-re->` arrow can now rename directories (like `-->`)
+- Use `/` instead of `\` as path separator for (regex) rules on Windows
+- Use the label to the left for exercises instead of the button name to
+  determine the folder name
+### Fixed
+- Video pagination handling in ILIAS crawler
+## 3.0.1 - 2021-06-01
+### Added
+- `credential-file` authenticator
+- `--credential-file` option for `kit-ilias-web` command
+- Warning if using concurrent tasks with `kit-ilias-web`
+### Changed
+- Cookies are now stored in a text-based format
+### Fixed
+- Date parsing now also works correctly in non-group exercises
## 3.0.0 - 2021-05-31
### Added

CONFIG.md

@@ -49,6 +49,9 @@ see the type's [documentation](#crawler-types) below. The following options are
common to all crawlers:

- `type`: The available types are specified in [this section](#crawler-types).
+- `skip`: Whether the crawler should be skipped during normal execution. The
+  crawler can still be executed manually using the `--crawler` or `-C` flags.
+  (Default: `no`)
- `output_dir`: The directory the crawler synchronizes files to. A crawler will
  never place any files outside of this directory. (Default: the crawler's name)
- `redownload`: When to download a file that is already present locally.
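The `skip` option added above could be set in a config file roughly like this (the crawler name is illustrative):

```
[crawl:old-course]
type = kit-ilias-web
skip = yes
```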
@@ -180,6 +183,22 @@ via the terminal.
- `username`: The username. (Optional)
- `password`: The password. (Optional)
+
+### The `credential-file` authenticator
+
+This authenticator reads a username and a password from a credential file.
+
+- `path`: Path to the credential file. (Required)
+
+The credential file has exactly two lines (trailing newline optional). The first
+line starts with `username=` and contains the username, the second line starts
+with `password=` and contains the password. The username and password may
+contain any characters except a line break.
+
+```
+username=AzureDiamond
+password=hunter2
+```
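A config file could then reference such a credential file roughly like this (the section name and path are illustrative):

```
[auth:ilias]
type = credential-file
path = ilias.cred
```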
### The `keyring` authenticator

This authenticator uses the system keyring to store passwords. The username can
@@ -203,56 +222,87 @@ This authenticator does not support usernames.
Transformation rules are rules for renaming and excluding files and directories.
They are specified line-by-line in a crawler's `transform` option. When a
crawler needs to apply a rule to a path, it goes through this list top-to-bottom
-and choose the first matching rule.
+and applies the first matching rule.

To see this process in action, you can use the `--debug-transforms` flag or
the `--explain` flag.

-Each line has the format `SOURCE ARROW TARGET` where `TARGET` is optional.
-`SOURCE` is either a normal path without spaces (e. g. `foo/bar`), or a string
-literal delimited by `"` or `'` (e. g. `"foo\" bar/baz"`). Python's string
-escape syntax is supported. Trailing slashes are ignored. `TARGET` can be
-formatted like `SOURCE`, but it can also be a single exclamation mark without
-quotes (`!`). `ARROW` is one of `-->`, `-name->`, `-exact->`, `-re->` and
-`-name-re->`.
-If a rule's target is `!`, this means that when the rule matches on a path, the
-corresponding file or directory is ignored. If a rule's target is missing, the
-path is matched but not modified.
+Each rule has the format `SOURCE ARROW TARGET` (e. g. `foo/bar --> foo/baz`).
+The arrow specifies how the source and target are interpreted. The different
+kinds of arrows are documented below.
+
+`SOURCE` and `TARGET` are either a bunch of characters without spaces (e. g.
+`foo/bar`) or string literals (e. g. `"foo/b a r"`). The former syntax has no
+concept of escaping characters, so the backslash is just another character. The
+string literals however support Python's escape syntax (e. g.
+`"foo\\bar\tbaz"`). This also means that in string literals, backslashes must be
+escaped.
+
+`TARGET` can additionally be a single exclamation mark `!` (*not* `"!"`). When a
+rule with a `!` as target matches a path, the corresponding file or directory is
+ignored by the crawler instead of renamed.
+
+`TARGET` can also be omitted entirely. When a rule without target matches a
+path, the path is returned unmodified. This is useful to prevent rules further
+down from matching instead.
+
+Each arrow's behaviour can be modified slightly by changing the arrow's head
+from `>` to `>>`. When a rule with a `>>` arrow head matches a path, it doesn't
+return immediately like a normal arrow. Instead, it replaces the current path
+with its output and continues on to the next rule. In effect, this means that
+multiple rules can be applied sequentially.
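For instance (an illustrative pair of rules), `foo -exact->> bar` would rename `foo` to `bar` and then fall through, so the next rule can act on the new path:

```
foo -exact->> bar
bar --> !
```

Here a file or directory `foo` is first renamed to `bar` and then ignored by the second rule.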
### The `-->` arrow

-The `-->` arrow is a basic renaming operation. If a path begins with `SOURCE`,
-that part of the path is replaced with `TARGET`. This means that the rule
-`foo/bar --> baz` would convert `foo/bar` into `baz`, but also `foo/bar/xyz`
-into `baz/xyz`. The rule `foo --> !` would ignore a directory named `foo` as
-well as all its contents.
+The `-->` arrow is a basic renaming operation for files and directories. If a
+path matches `SOURCE`, it is renamed to `TARGET`.
+
+Example: `foo/bar --> baz`
+- Doesn't match `foo`, `a/foo/bar` or `foo/baz`
+- Converts `foo/bar` into `baz`
+- Converts `foo/bar/wargl` into `baz/wargl`
+
+Example: `foo/bar --> !`
+- Doesn't match `foo`, `a/foo/bar` or `foo/baz`
+- Ignores `foo/bar` and any of its children

### The `-name->` arrow

The `-name->` arrow lets you rename files and directories by their name,
regardless of where they appear in the file tree. Because of this, its `SOURCE`
must not contain multiple path segments, only a single name. This restriction
-does not apply to its `TARGET`. The `-name->` arrow is not applied recursively
-to its own output to prevent infinite loops.
+does not apply to its `TARGET`.

-For example, the rule `foo -name-> bar/baz` would convert `a/foo` into
-`a/bar/baz` and `a/foo/b/c/foo` into `a/bar/baz/b/c/bar/baz`. The rule `foo
--name-> !` would ignore all directories and files named `foo`.
+Example: `foo -name-> bar/baz`
+- Doesn't match `a/foobar/b` or `x/Foo/y/z`
+- Converts `hello/foo` into `hello/bar/baz`
+- Converts `foo/world` into `bar/baz/world`
+- Converts `a/foo/b/c/foo` into `a/bar/baz/b/c/bar/baz`
+
+Example: `foo -name-> !`
+- Doesn't match `a/foobar/b` or `x/Foo/y/z`
+- Ignores any path containing a segment `foo`

### The `-exact->` arrow

-The `-exact->` arrow requires the path to match `SOURCE` exactly. This means
-that the rule `foo/bar -exact-> baz` would still convert `foo/bar` into `baz`,
-but `foo/bar/xyz` would be unaffected. Also, `foo -exact-> !` would only ignore
-`foo`, but not its contents (if it has any). The examples below show why this is
-useful.
+The `-exact->` arrow requires the path to match `SOURCE` exactly. The examples
+below show why this is useful.
+
+Example: `foo/bar -exact-> baz`
+- Doesn't match `foo`, `a/foo/bar` or `foo/baz`
+- Converts `foo/bar` into `baz`
+- Doesn't match `foo/bar/wargl`
+
+Example: `foo/bar -exact-> !`
+- Doesn't match `foo`, `a/foo/bar` or `foo/baz`
+- Ignores only `foo/bar`, not its children

### The `-re->` arrow

-The `-re->` arrow uses regular expressions. `SOURCE` is a regular expression
-that must match the entire path. If this is the case, then the capturing groups
-are available in `TARGET` for formatting.
+The `-re->` arrow is like the `-->` arrow but with regular expressions. `SOURCE`
+is a regular expression and `TARGET` an f-string based template. If a path
+matches `SOURCE`, the output path is created using `TARGET` as template.
+`SOURCE` is automatically anchored.

`TARGET` uses Python's [format string syntax][3]. The *n*-th capturing group can
be referred to as `{g<n>}` (e. g. `{g3}`). `{g0}` refers to the original path.

@@ -269,18 +319,37 @@ can use `{i3:05}`.
PFERD even allows you to write entire expressions inside the curly braces, for
example `{g2.lower()}` or `{g3.replace(' ', '_')}`.

+Example: `f(oo+)/be?ar -re-> B{g1.upper()}H/fear`
+- Doesn't match `a/foo/bar`, `foo/abc/bar`, `afoo/bar` or `foo/bars`
+- Converts `foo/bar` into `BOOH/fear`
+- Converts `fooooo/bear` into `BOOOOOH/fear`
+- Converts `foo/bar/baz` into `BOOH/fear/baz`

[3]: <https://docs.python.org/3/library/string.html#format-string-syntax> "Format String Syntax"
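As a minimal sketch of these semantics (not PFERD's actual implementation; the `{i<n>}` integer groups and directory handling are omitted):

```python
import re
from typing import Optional

def apply_re_rule(source: str, target: str, path: str) -> Optional[str]:
    """Illustrative '-re->' semantics: SOURCE is an anchored regex and
    TARGET an f-string-like template with groups available as g0, g1, ..."""
    match = re.fullmatch(source, path)  # fullmatch = automatic anchoring
    if match is None:
        return None
    # g0 is the entire path, g1..gn are the capturing groups
    env = {f"g{i}": match.group(i) for i in range(match.re.groups + 1)}
    # Evaluating the target as an f-string enables expressions like {g1.upper()}
    return eval("f" + repr(target), {}, env)

print(apply_re_rule(r"f(oo+)/be?ar", "B{g1.upper()}H/fear", "fooooo/bear"))
# -> BOOOOOH/fear
```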
### The `-name-re->` arrow

The `-name-re->` arrow is like a combination of the `-name->` and `-re->` arrows.
+Instead of the `SOURCE` being the name of a directory or file, it's a regex that
+is matched against the names of directories and files. `TARGET` works like the
+`-re->` arrow's target.

-For example, the arrow `(.*)\.jpeg -name-re-> {g1}.jpg` will rename all `.jpeg`
-extensions into `.jpg`. The arrow `\..+ -name-re-> !` will ignore all files and
-directories starting with `.`.
+Example: `(.*)\.jpeg -name-re-> {g1}.jpg`
+- Doesn't match `foo/bar.png`, `baz.JPEG` or `hello,jpeg`
+- Converts `foo/bar.jpeg` into `foo/bar.jpg`
+- Converts `foo.jpeg/bar/baz.jpeg` into `foo.jpg/bar/baz.jpg`
+
+Example: `\..+ -name-re-> !`
+- Doesn't match `.`, `test` or `a.b`
+- Ignores all files and directories starting with `.`
+
+### The `-exact-re->` arrow
+
+The `-exact-re->` arrow is like a combination of the `-exact->` and `-re->`
+arrows.
+
+Example: `f(oo+)/be?ar -exact-re-> B{g1.upper()}H/fear`
+- Doesn't match `a/foo/bar`, `foo/abc/bar`, `afoo/bar` or `foo/bars`
+- Converts `foo/bar` into `BOOH/fear`
+- Converts `fooooo/bear` into `BOOOOOH/fear`
+- Doesn't match `foo/bar/baz`
### Example: Tutorials

@@ -307,8 +376,7 @@ tutorials --> !
The second rule is required for many crawlers since they use the rules to decide
which directories to crawl. If it was missing when the crawler looks at
`tutorials/`, the third rule would match. This means the crawler would not crawl
-the `tutorials/` directory and thus not discover that `tutorials/tut02/`
-existed.
+the `tutorials/` directory and thus not discover that `tutorials/tut02/` exists.

Since the second rule is only relevant for crawling, the `TARGET` is left out.

@@ -333,9 +401,9 @@ To do this, you can use the most powerful of arrows: The regex arrow.
Note the escaped backslashes on the `SOURCE` side.

-### Example: Crawl a python project
+### Example: Crawl a Python project

-You are crawling a python project and want to ignore all hidden files (files
+You are crawling a Python project and want to ignore all hidden files (files
whose name starts with a `.`), all `__pycache__` directories and all markdown
files (for some weird reason).

@@ -355,11 +423,21 @@ README.md
...
```

-For this task, the name arrows can be used. They are variants of the normal
-arrows that only look at the file name instead of the entire path.
+For this task, the name arrows can be used.

```
\..* -name-re-> !
__pycache__ -name-> !
.*\.md -name-re-> !
```

+### Example: Clean up names
+
+You want to convert all paths into lowercase and replace spaces with underscores
+before applying any rules. This can be achieved using the `>>` arrow heads.
+
+```
+(.*) -re->> "{g1.lower().replace(' ', '_')}"
+<other rules go here>
+```

PFERD/__main__.py

@@ -5,7 +5,8 @@ import os
import sys
from pathlib import Path

-from .cli import PARSER, load_default_section
+from .auth import AuthLoadError
+from .cli import PARSER, ParserLoadError, load_default_section
from .config import Config, ConfigDumpError, ConfigLoadError, ConfigOptionError
from .logging import log
from .pferd import Pferd, PferdLoadError

@@ -36,6 +37,9 @@ def load_config(args: argparse.Namespace) -> Config:
        log.error(str(e))
        log.error_contd(e.reason)
        sys.exit(1)
+    except ParserLoadError as e:
+        log.error(str(e))
+        sys.exit(1)

def configure_logging_from_args(args: argparse.Namespace) -> None:

@@ -112,7 +116,7 @@ def main() -> None:
        sys.exit()

    try:
-        pferd = Pferd(config, args.crawler)
+        pferd = Pferd(config, args.crawler, args.skip)
    except PferdLoadError as e:
        log.unlock()
        log.error(str(e))

@@ -131,7 +135,7 @@ def main() -> None:
            loop.close()
        else:
            asyncio.run(pferd.run(args.debug_transforms))
-    except ConfigOptionError as e:
+    except (ConfigOptionError, AuthLoadError) as e:
        log.unlock()
        log.error(str(e))
        sys.exit(1)

@@ -143,7 +147,6 @@ def main() -> None:
        log.unlock()
        log.explain_topic("Interrupted, exiting immediately")
        log.explain("Open files and connections are left for the OS to clean up")
-        log.explain("Temporary files are not cleaned up")
        pferd.print_report()
        # TODO Clean up tmp files
        # And when those files *do* actually get cleaned up properly,

PFERD/auth/__init__.py

@@ -2,7 +2,8 @@ from configparser import SectionProxy
from typing import Callable, Dict

from ..config import Config
-from .authenticator import Authenticator, AuthError, AuthSection  # noqa: F401
+from .authenticator import Authenticator, AuthError, AuthLoadError, AuthSection  # noqa: F401
+from .credential_file import CredentialFileAuthenticator, CredentialFileAuthSection
from .keyring import KeyringAuthenticator, KeyringAuthSection
from .simple import SimpleAuthenticator, SimpleAuthSection
from .tfa import TfaAuthenticator

@@ -14,10 +15,12 @@ AuthConstructor = Callable[[
], Authenticator]

AUTHENTICATORS: Dict[str, AuthConstructor] = {
+    "credential-file": lambda n, s, c:
+        CredentialFileAuthenticator(n, CredentialFileAuthSection(s), c),
+    "keyring": lambda n, s, c:
+        KeyringAuthenticator(n, KeyringAuthSection(s)),
    "simple": lambda n, s, c:
        SimpleAuthenticator(n, SimpleAuthSection(s)),
    "tfa": lambda n, s, c:
        TfaAuthenticator(n),
-    "keyring": lambda n, s, c:
-        KeyringAuthenticator(n, KeyringAuthSection(s))
}

PFERD/auth/authenticator.py

@@ -13,14 +13,15 @@ class AuthError(Exception):

class AuthSection(Section):
-    pass
+    def type(self) -> str:
+        value = self.s.get("type")
+        if value is None:
+            self.missing_value("type")
+        return value

class Authenticator(ABC):
-    def __init__(
-        self,
-        name: str
-    ) -> None:
+    def __init__(self, name: str) -> None:
        """
        Initialize an authenticator from its name and its section in the config
        file.

PFERD/auth/credential_file.py (new file)

@@ -0,0 +1,44 @@
+from pathlib import Path
+from typing import Tuple
+
+from ..config import Config
+from ..utils import fmt_real_path
+from .authenticator import Authenticator, AuthLoadError, AuthSection
+
+
+class CredentialFileAuthSection(AuthSection):
+    def path(self) -> Path:
+        value = self.s.get("path")
+        if value is None:
+            self.missing_value("path")
+        return Path(value)
+
+
+class CredentialFileAuthenticator(Authenticator):
+    def __init__(self, name: str, section: CredentialFileAuthSection, config: Config) -> None:
+        super().__init__(name)
+
+        path = config.default_section.working_dir() / section.path()
+        try:
+            with open(path) as f:
+                lines = list(f)
+        except OSError as e:
+            raise AuthLoadError(f"No credential file at {fmt_real_path(path)}") from e
+
+        if len(lines) != 2:
+            raise AuthLoadError("Credential file must be two lines long")
+        [uline, pline] = lines
+        uline = uline[:-1]  # Remove trailing newline
+        if pline.endswith("\n"):
+            pline = pline[:-1]
+
+        if not uline.startswith("username="):
+            raise AuthLoadError("First line must start with 'username='")
+        if not pline.startswith("password="):
+            raise AuthLoadError("Second line must start with 'password='")
+
+        self._username = uline[9:]
+        self._password = pline[9:]
+
+    async def credentials(self) -> Tuple[str, str]:
+        return self._username, self._password

PFERD/auth/keyring.py

@@ -18,11 +18,7 @@ class KeyringAuthSection(AuthSection):

class KeyringAuthenticator(Authenticator):
-    def __init__(
-        self,
-        name: str,
-        section: KeyringAuthSection,
-    ) -> None:
+    def __init__(self, name: str, section: KeyringAuthSection) -> None:
        super().__init__(name)

        self._username = section.username()

PFERD/auth/simple.py

@@ -14,11 +14,7 @@ class SimpleAuthSection(AuthSection):

class SimpleAuthenticator(Authenticator):
-    def __init__(
-        self,
-        name: str,
-        section: SimpleAuthSection,
-    ) -> None:
+    def __init__(self, name: str, section: SimpleAuthSection) -> None:
        super().__init__(name)

        self._username = section.username()

PFERD/auth/tfa.py

@@ -6,10 +6,7 @@ from .authenticator import Authenticator, AuthError

class TfaAuthenticator(Authenticator):
-    def __init__(
-        self,
-        name: str,
-    ) -> None:
+    def __init__(self, name: str) -> None:
        super().__init__(name)

    async def username(self) -> str:

PFERD/cli/__init__.py

@@ -1,11 +1,12 @@
# isort: skip_file

# The order of imports matters because each command module registers itself
-# with the parser from ".parser". Because of this, isort is disabled for this
+# with the parser from ".parser" and the import order affects the order in
+# which they appear in the help. Because of this, isort is disabled for this
# file. Also, since we're reexporting or just using the side effect of
# importing itself, we get a few linting warnings, which we're disabling as
# well.

from . import command_local  # noqa: F401 imported but unused
from . import command_kit_ilias_web  # noqa: F401 imported but unused
-from .parser import PARSER, load_default_section  # noqa: F401 imported but unused
+from .parser import PARSER, ParserLoadError, load_default_section  # noqa: F401 imported but unused

PFERD/cli/command_kit_ilias_web.py

@@ -4,7 +4,8 @@ from pathlib import Path

from ..crawl.ilias.file_templates import Links
from ..logging import log
-from .parser import CRAWLER_PARSER, SUBPARSERS, BooleanOptionalAction, load_crawler, show_value_error
+from .parser import (CRAWLER_PARSER, SUBPARSERS, BooleanOptionalAction, ParserLoadError, load_crawler,
+                     show_value_error)

SUBPARSER = SUBPARSERS.add_parser(
    "kit-ilias-web",

@@ -38,6 +39,12 @@ GROUP.add_argument(
    action=BooleanOptionalAction,
    help="use the system keyring to store and retrieve passwords"
)
+GROUP.add_argument(
+    "--credential-file",
+    type=Path,
+    metavar="PATH",
+    help="read username and password from a credential file"
+)
GROUP.add_argument(
    "--links",
    type=show_value_error(Links.from_string),

@@ -88,11 +95,19 @@ def load(
    parser["auth:ilias"] = {}
    auth_section = parser["auth:ilias"]
-    auth_section["type"] = "simple"
+    if args.credential_file is not None:
+        if args.username is not None:
+            raise ParserLoadError("--credential-file and --username can't be used together")
+        if args.keyring:
+            raise ParserLoadError("--credential-file and --keyring can't be used together")
+        auth_section["type"] = "credential-file"
+        auth_section["path"] = str(args.credential_file)
+    elif args.keyring:
+        auth_section["type"] = "keyring"
+    else:
+        auth_section["type"] = "simple"
    if args.username is not None:
        auth_section["username"] = args.username
-    if args.keyring:
-        auth_section["type"] = "keyring"

SUBPARSER.set_defaults(command=load)

PFERD/cli/parser.py

@@ -8,6 +8,10 @@ from ..output_dir import OnConflict, Redownload
from ..version import NAME, VERSION

+
+class ParserLoadError(Exception):
+    pass
+

# TODO Replace with argparse version when updating to 3.9?
class BooleanOptionalAction(argparse.Action):
    def __init__(

@@ -177,6 +181,14 @@ PARSER.add_argument(
    help="only execute a single crawler."
    " Can be specified multiple times to execute multiple crawlers"
)
+PARSER.add_argument(
+    "--skip", "-S",
+    action="append",
+    type=str,
+    metavar="NAME",
+    help="don't execute this particular crawler."
+    " Can be specified multiple times to skip multiple crawlers"
+)
PARSER.add_argument(
    "--working-dir",
    type=Path,

PFERD/config.py

@@ -69,6 +69,7 @@ class Section:

class DefaultSection(Section):
    def working_dir(self) -> Path:
+        # TODO Change to working dir instead of manually prepending it to paths
        pathstr = self.s.get("working_dir", ".")
        return Path(pathstr).expanduser()

PFERD/crawl/__init__.py

@@ -3,7 +3,7 @@ from typing import Callable, Dict

from ..auth import Authenticator
from ..config import Config
-from .crawler import Crawler, CrawlError  # noqa: F401
+from .crawler import Crawler, CrawlError, CrawlerSection  # noqa: F401
from .ilias import KitIliasWebCrawler, KitIliasWebCrawlerSection
from .local_crawler import LocalCrawler, LocalCrawlerSection

PFERD/crawl/crawler.py

@@ -56,7 +56,7 @@ def noncritical(f: Wrapped) -> Wrapped:
    return wrapper  # type: ignore

-AWrapped = TypeVar("AWrapped", bound=Callable[..., Awaitable[None]])
+AWrapped = TypeVar("AWrapped", bound=Callable[..., Awaitable[Optional[Any]]])

def anoncritical(f: AWrapped) -> AWrapped:

@@ -72,14 +72,14 @@ def anoncritical(f: AWrapped) -> AWrapped:
    Warning: Must only be applied to member functions of the Crawler class!
    """

-    async def wrapper(*args: Any, **kwargs: Any) -> None:
+    async def wrapper(*args: Any, **kwargs: Any) -> Optional[Any]:
        if not (args and isinstance(args[0], Crawler)):
            raise RuntimeError("@anoncritical must only applied to Crawler methods")

        crawler = args[0]

        try:
-            await f(*args, **kwargs)
+            return await f(*args, **kwargs)
        except (CrawlWarning, OutputDirError, MarkDuplicateError, MarkConflictError) as e:
            log.warn(str(e))
            crawler.error_free = False

@@ -87,6 +87,8 @@ def anoncritical(f: AWrapped) -> AWrapped:
            crawler.error_free = False
            raise

+        return None
+
    return wrapper  # type: ignore

@@ -132,6 +134,15 @@ class DownloadToken(ReusableAsyncContextManager[Tuple[ProgressBar, FileSink]]):

class CrawlerSection(Section):
+    def type(self) -> str:
+        value = self.s.get("type")
+        if value is None:
+            self.missing_value("type")
+        return value
+
+    def skip(self) -> bool:
+        return self.s.getboolean("skip", fallback=False)
+
    def output_dir(self, name: str) -> Path:
        # TODO Use removeprefix() after switching to 3.9
        if name.startswith("crawl:"):

@@ -309,6 +320,7 @@ class Crawler(ABC):
            log.explain("Warnings or errors occurred during this run")
            log.explain("Answer: No")

+    @anoncritical
    async def run(self) -> None:
        """
        Start the crawling process. Call this function if you want to use a

PFERD/crawl/http_crawler.py

@@ -1,7 +1,8 @@
import asyncio
+import http.cookies
import ssl
from pathlib import Path, PurePath
-from typing import Dict, List, Optional
+from typing import Any, Dict, List, Optional

import aiohttp
import certifi

@@ -105,6 +106,25 @@ class HttpCrawler(Crawler):
            self._shared_cookie_jar_paths.append(self._cookie_jar_path)

+    def _load_cookies_from_file(self, path: Path) -> None:
+        jar: Any = http.cookies.SimpleCookie()
+        with open(path) as f:
+            for i, line in enumerate(f):
+                # Names of headers are case insensitive
+                if line[:11].lower() == "set-cookie:":
+                    jar.load(line[11:])
+                else:
+                    log.explain(f"Line {i} doesn't start with 'Set-Cookie:', ignoring it")
+        self._cookie_jar.update_cookies(jar)
+
+    def _save_cookies_to_file(self, path: Path) -> None:
+        jar: Any = http.cookies.SimpleCookie()
+        for morsel in self._cookie_jar:
+            jar[morsel.key] = morsel
+        with open(path, "w") as f:
+            f.write(jar.output(sep="\n"))
+            f.write("\n")  # A trailing newline is just common courtesy
+
    def _load_cookies(self) -> None:
        log.explain_topic("Loading cookies")

@@ -134,7 +154,7 @@ class HttpCrawler(Crawler):
        log.explain(f"Loading cookies from {fmt_real_path(cookie_jar_path)}")
        try:
-            self._cookie_jar.load(cookie_jar_path)
+            self._load_cookies_from_file(cookie_jar_path)
        except Exception as e:
            log.explain("Failed to load cookies")
            log.explain(str(e))

@@ -144,7 +164,7 @@ class HttpCrawler(Crawler):
        try:
            log.explain(f"Saving cookies to {fmt_real_path(self._cookie_jar_path)}")
-            self._cookie_jar.save(self._cookie_jar_path)
+            self._save_cookies_to_file(self._cookie_jar_path)
        except Exception as e:
            log.warn(f"Failed to save cookies to {fmt_real_path(self._cookie_jar_path)}")
            log.warn(str(e))

PFERD/crawl/ilias/kit_ilias_html.py

@@ -22,6 +22,7 @@ class IliasElementType(Enum):
    FOLDER = "folder"
    FORUM = "forum"
    LINK = "link"
+    BOOKING = "booking"
    MEETING = "meeting"
    VIDEO = "video"
    VIDEO_PLAYER = "video_player"

@@ -37,6 +38,17 @@ class IliasPageElement:
    mtime: Optional[datetime] = None
    description: Optional[str] = None

+    def id(self) -> str:
+        regexes = [r"eid=(?P<id>[0-9a-z\-]+)", r"file_(?P<id>\d+)", r"ref_id=(?P<id>\d+)"]
+
+        for regex in regexes:
+            if match := re.search(regex, self.url):
+                return match.groupdict()["id"]
+
+        # Fall back to URL
+        log.warn(f"Didn't find identity for {self.name} - {self.url}. Please report this.")
+        return self.url
+

class IliasPage:

@@ -62,9 +74,11 @@ class IliasPage:
            log.explain("Page is a normal folder, searching for elements")
            return self._find_normal_entries()

-    def get_next_stage_url(self) -> Optional[str]:
+    def get_next_stage_element(self) -> Optional[IliasPageElement]:
        if self._is_ilias_opencast_embedding():
-            return self.get_child_elements()[0].url
+            return self.get_child_elements()[0]
+        if self._page_type == IliasElementType.VIDEO_FOLDER_MAYBE_PAGINATED:
+            return self._find_video_entries_paginated()[0]
        return None

    def _is_video_player(self) -> bool:

@@ -230,12 +244,16 @@ class IliasPage:
        parent_row: Tag = link.findParent("tr")
        children: List[Tag] = parent_row.findChildren("td")

+        # <checkbox> <name> <uploader> <date> <download>
+        #      0       1        2        3       4
        name = _sanitize_path_name(children[1].getText().strip())
-        date = demangle_date(children[3].getText().strip())
        log.explain(f"Found exercise detail entry {name!r}")

+        for child in reversed(children):
+            date = demangle_date(child.getText().strip(), fail_silently=True)
+            if date is not None:
+                break
+        if date is None:
+            log.warn(f"Date parsing failed for exercise entry {name!r}")
+
        results.append(IliasPageElement(
            IliasElementType.FILE,
            self._abs_url_from_link(link),

@@ -289,7 +307,13 @@ class IliasPage:
        # Add each listing as a new
        for listing in file_listings:
-            file_name = _sanitize_path_name(listing.getText().strip())
+            parent_container: Tag = listing.findParent(
+                "div", attrs={"class": lambda x: x and "form-group" in x}
+            )
+            label_container: Tag = parent_container.find(
+                attrs={"class": lambda x: x and "control-label" in x}
+            )
+            file_name = _sanitize_path_name(label_container.getText().strip())
            url = self._abs_url_from_link(listing)
            log.explain(f"Found exercise detail {file_name!r} at {url}")
            results.append(IliasPageElement(

@@ -470,7 +494,7 @@ class IliasPage:
            return None

        if "opencast" in str(img_tag["alt"]).lower():
-            return IliasElementType.VIDEO_FOLDER
+            return IliasElementType.VIDEO_FOLDER_MAYBE_PAGINATED

        if str(img_tag["src"]).endswith("icon_exc.svg"):
            return IliasElementType.EXERCISE

@@ -478,6 +502,9 @@ class IliasPage:
        if str(img_tag["src"]).endswith("icon_webr.svg"):
            return IliasElementType.LINK

+        if str(img_tag["src"]).endswith("icon_book.svg"):
+            return IliasElementType.BOOKING
+
        if str(img_tag["src"]).endswith("frm.svg"):
            return IliasElementType.FORUM

@@ -522,7 +549,7 @@ german_months = ['Jan', 'Feb', 'Mär', 'Apr', 'Mai', 'Jun', 'Jul', 'Aug', 'Sep',
english_months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

-def demangle_date(date_str: str) -> Optional[datetime]:
+def demangle_date(date_str: str, fail_silently: bool = False) -> Optional[datetime]:
    """
    Demangle a given date in one of the following formats:
    "Gestern, HH:MM"

@@ -554,6 +581,7 @@ def demangle_date(date_str: str) -> Optional[datetime]:
        return datetime(year, month, day, hour, minute)
    except Exception:
-        log.warn(f"Date parsing failed for {date_str!r}")
+        if not fail_silently:
+            log.warn(f"Date parsing failed for {date_str!r}")
        return None

PFERD/crawl/ilias/kit_ilias_web_crawler.py

@@ -12,7 +12,7 @@ from ...config import Config
from ...logging import ProgressBar, log
from ...output_dir import FileSink, Redownload
from ...utils import fmt_path, soupify, url_set_query_param
-from ..crawler import CrawlError, CrawlWarning, anoncritical
+from ..crawler import CrawlError, CrawlToken, CrawlWarning, DownloadToken, anoncritical
from ..http_crawler import HttpCrawler, HttpCrawlerSection
from .file_templates import Links
from .kit_ilias_html import IliasElementType, IliasPage, IliasPageElement

@@ -21,7 +21,6 @@ TargetType = Union[str, int]

class KitIliasWebCrawlerSection(HttpCrawlerSection):
-
    def target(self) -> TargetType:
        target = self.s.get("target")
        if not target:

@@ -82,17 +81,16 @@ _VIDEO_ELEMENTS: Set[IliasElementType] = set([
    IliasElementType.VIDEO_FOLDER_MAYBE_PAGINATED,
])

-AWrapped = TypeVar("AWrapped", bound=Callable[..., Awaitable[None]])
+AWrapped = TypeVar("AWrapped", bound=Callable[..., Awaitable[Optional[Any]]])

def _iorepeat(attempts: int, name: str) -> Callable[[AWrapped], AWrapped]:
    def decorator(f: AWrapped) -> AWrapped:
-        async def wrapper(*args: Any, **kwargs: Any) -> None:
+        async def wrapper(*args: Any, **kwargs: Any) -> Optional[Any]:
            last_exception: Optional[BaseException] = None
            for round in range(attempts):
                try:
-                    await f(*args, **kwargs)
-                    return
+                    return await f(*args, **kwargs)
                except aiohttp.ContentTypeError:  # invalid content type
                    raise CrawlWarning("ILIAS returned an invalid content type")
                except aiohttp.TooManyRedirects:

@@ -164,6 +162,12 @@ class KitIliasWebCrawler(HttpCrawler):
        auth = section.auth(authenticators)
        super().__init__(name, section, config, shared_auth=auth)

+        if section.tasks() > 1:
+            log.warn("""
+Please avoid using too many parallel requests as these are the KIT ILIAS
+instance's greatest bottleneck.
+            """.strip())
+
        self._shibboleth_login = KitShibbolethLogin(
            auth,
            section.tfa_auth(authenticators),

@@ -225,17 +229,34 @@ class KitIliasWebCrawler(HttpCrawler):
        # Fill up our task list with the found elements
        await gather_elements()

-        tasks = [self._handle_ilias_element(PurePath("."), element) for element in elements]
+        elements.sort(key=lambda e: e.id())
+
+        tasks: List[Awaitable[None]] = []
+        for element in elements:
+            if handle := await self._handle_ilias_element(PurePath("."), element):
+                tasks.append(asyncio.create_task(handle))

        # And execute them
        await self.gather(tasks)

-    async def _handle_ilias_page(self, url: str, parent: IliasPageElement, path: PurePath) -> None:
+    async def _handle_ilias_page(
+        self,
+        url: str,
+        parent: IliasPageElement,
+        path: PurePath,
+    ) -> Optional[Awaitable[None]]:
        maybe_cl = await self.crawl(path)
        if not maybe_cl:
-            return
-        cl = maybe_cl  # Not mypy's fault, but explained here: https://github.com/python/mypy/issues/2608
+            return None
+        return self._crawl_ilias_page(url, parent, maybe_cl)
+
+    async def _crawl_ilias_page(
+        self,
+        url: str,
+        parent: IliasPageElement,
+        cl: CrawlToken,
+    ) -> None:

        elements: List[IliasPageElement] = []

        @_iorepeat(3, "crawling folder")

@@ -243,19 +264,30 @@ class KitIliasWebCrawler(HttpCrawler):
            elements.clear()
            async with cl:
                next_stage_url: Optional[str] = url
+                current_parent = parent

                while next_stage_url:
                    soup = await self._get_page(next_stage_url)
-                    log.explain_topic(f"Parsing HTML page for {fmt_path(path)}")
+                    log.explain_topic(f"Parsing HTML page for {fmt_path(cl.path)}")
                    log.explain(f"URL: {next_stage_url}")
-                    page = IliasPage(soup, url, parent)
-                    next_stage_url = page.get_next_stage_url()
+                    page = IliasPage(soup, next_stage_url, current_parent)
+
+                    if next_element := page.get_next_stage_element():
+                        current_parent = next_element
+                        next_stage_url = next_element.url
+                    else:
+                        next_stage_url = None
+
                    elements.extend(page.get_child_elements())

        # Fill up our task list with the found elements
        await gather_elements()

-        tasks = [self._handle_ilias_element(cl.path, element) for element in elements]
+        elements.sort(key=lambda e: e.id())
+
+        tasks: List[Awaitable[None]] = []
+        for element in elements:
+            if handle := await self._handle_ilias_element(cl.path, element):
+                tasks.append(asyncio.create_task(handle))

        # And execute them
        await self.gather(tasks)

@@ -264,7 +296,11 @@ class KitIliasWebCrawler(HttpCrawler):
    # Shouldn't happen but we also really don't want to let I/O errors bubble up to anoncritical.
    # If that happens we will be terminated as anoncritical doesn't treat them as non-critical.
    @_wrap_io_in_warning("handling ilias element")
-    async def _handle_ilias_element(self, parent_path: PurePath, element: IliasPageElement) -> None:
+    async def _handle_ilias_element(
+        self,
+        parent_path: PurePath,
+        element: IliasPageElement,
+    ) -> Optional[Awaitable[None]]:
        element_path = PurePath(parent_path, element.name)

        if element.type in _VIDEO_ELEMENTS:

@@ -272,35 +308,43 @@ class KitIliasWebCrawler(HttpCrawler):
            if not self._videos:
                log.explain("Video crawling is disabled")
                log.explain("Answer: no")
-                return
+                return None
            else:
                log.explain("Video crawling is enabled")
                log.explain("Answer: yes")

        if element.type == IliasElementType.FILE:
-            await self._download_file(element, element_path)
+            return await self._handle_file(element, element_path)
        elif element.type == IliasElementType.FORUM:
            log.explain_topic(f"Decision: Crawl {fmt_path(element_path)}")
            log.explain("Forums are not supported")
            log.explain("Answer: No")
+            return None
        elif element.type == IliasElementType.TEST:
            log.explain_topic(f"Decision: Crawl {fmt_path(element_path)}")
            log.explain("Tests contain no relevant files")
            log.explain("Answer: No")
+            return None
        elif element.type == IliasElementType.LINK:
-            await self._download_link(element, element_path)
+            return await self._handle_link(element, element_path)
+        elif element.type == IliasElementType.BOOKING:
+            return await self._handle_booking(element, element_path)
        elif element.type == IliasElementType.VIDEO:
-            await self._download_file(element, element_path)
+            return await self._handle_file(element, element_path)
        elif element.type == IliasElementType.VIDEO_PLAYER:
-            await self._download_video(element, element_path)
+            return await self._handle_video(element, element_path)
        elif element.type in _DIRECTORY_PAGES:
-            await self._handle_ilias_page(element.url, element, element_path)
+            return await self._handle_ilias_page(element.url, element, element_path)
        else:
            # This will retry it a few times, failing everytime. It doesn't make any network
            # requests, so that's fine.
            raise CrawlWarning(f"Unknown element type: {element.type!r}")

-    async def _download_link(self, element: IliasPageElement, element_path: PurePath) -> None:
+    async def _handle_link(
+        self,
+        element: IliasPageElement,
+        element_path: PurePath,
+    ) -> Optional[Awaitable[None]]:
        log.explain_topic(f"Decision: Crawl Link {fmt_path(element_path)}")
        log.explain(f"Links type is {self._links}")

@@ -308,32 +352,72 @@ class KitIliasWebCrawler(HttpCrawler):
        link_template_maybe = self._links.template()
        link_extension = self._links.extension()
        if not link_template_maybe or not link_extension:
            log.explain("Answer: No")
-            return
+            return None
        else:
            log.explain("Answer: Yes")
-        link_template = link_template_maybe
        element_path = element_path.with_name(element_path.name + link_extension)

        maybe_dl = await self.download(element_path, mtime=element.mtime)
        if not maybe_dl:
-            return
-        dl = maybe_dl  # Not mypy's fault, but explained here: https://github.com/python/mypy/issues/2608
+            return None
+        return self._download_link(element, link_template_maybe, maybe_dl)

-        @_iorepeat(3, "resolving link")
-        async def impl() -> None:
-            async with dl as (bar, sink):
-                export_url = element.url.replace("cmd=calldirectlink", "cmd=exportHTML")
-                real_url = await self._resolve_link_target(export_url)
-                content = link_template
-                content = content.replace("{{link}}", real_url)
-                content = content.replace("{{name}}", element.name)
-                content = content.replace("{{description}}", str(element.description))
-                content = content.replace("{{redirect_delay}}", str(self._link_file_redirect_delay))
-                sink.file.write(content.encode("utf-8"))
-                sink.done()
-
-        await impl()
+    @_iorepeat(3, "resolving link")
+    async def _download_link(self, element: IliasPageElement, link_template: str, dl: DownloadToken) -> None:
+        async with dl as (bar, sink):
+            export_url = element.url.replace("cmd=calldirectlink", "cmd=exportHTML")
+            real_url = await self._resolve_link_target(export_url)
+            self._write_link_content(link_template, real_url, element.name, element.description, sink)
+
+    def _write_link_content(
+        self,
+        link_template: str,
+        url: str,
+        name: str,
+        description: Optional[str],
+        sink: FileSink,
+    ) -> None:
+        content = link_template
+        content = content.replace("{{link}}", url)
+        content = content.replace("{{name}}", name)
+        content = content.replace("{{description}}", str(description))
+        content = content.replace("{{redirect_delay}}", str(self._link_file_redirect_delay))
+        sink.file.write(content.encode("utf-8"))
+        sink.done()
+
+    async def _handle_booking(
+        self,
+        element: IliasPageElement,
+        element_path: PurePath,
+    ) -> Optional[Awaitable[None]]:
+        log.explain_topic(f"Decision: Crawl Booking Link {fmt_path(element_path)}")
+        log.explain(f"Links type is {self._links}")
+
+        link_template_maybe = self._links.template()
+        link_extension = self._links.extension()
+        if not link_template_maybe or not link_extension:
+            log.explain("Answer: No")
+            return None
+        else:
+            log.explain("Answer: Yes")
+        element_path = element_path.with_name(element_path.name + link_extension)
+
+        maybe_dl = await self.download(element_path, mtime=element.mtime)
+        if not maybe_dl:
+            return None
+
+        return self._download_booking(element, link_template_maybe, maybe_dl)
+
+    @_iorepeat(3, "resolving booking")
+    async def _download_booking(
+        self,
+        element: IliasPageElement,
+        link_template: str,
+        dl: DownloadToken,
+    ) -> None:
+        async with dl as (bar, sink):
+            self._write_link_content(link_template, element.url, element.name, element.description, sink)

    async def _resolve_link_target(self, export_url: str) -> str:
        async with self.session.get(export_url, allow_redirects=False) as resp:

@@ -350,16 +434,20 @@ class KitIliasWebCrawler(HttpCrawler):
        raise CrawlError("resolve_link_target failed even after authenticating")

-    async def _download_video(self, element: IliasPageElement, element_path: PurePath) -> None:
+    async def _handle_video(
+        self,
+        element: IliasPageElement,
+        element_path: PurePath,
+    ) -> Optional[Awaitable[None]]:
        # Videos will NOT be redownloaded - their content doesn't really change and they are chunky
        maybe_dl = await self.download(element_path, mtime=element.mtime, redownload=Redownload.NEVER)
        if not maybe_dl:
-            return
-        dl = maybe_dl  # Not mypy's fault, but explained here: https://github.com/python/mypy/issues/2608
+            return None
+        return self._download_video(element, maybe_dl)

    @_iorepeat(3, "downloading video")
-    async def impl() -> None:
+    async def _download_video(self, element: IliasPageElement, dl: DownloadToken) -> None:
        assert dl  # The function is only reached when dl is not None
        async with dl as (bar, sink):
            page = IliasPage(await self._get_page(element.url), element.url, element)
            real_element = page.get_child_elements()[0]

@@ -368,22 +456,22 @@ class KitIliasWebCrawler(HttpCrawler):
            await self._stream_from_url(real_element.url, sink, bar, is_video=True)

-        await impl()
-
-    async def _download_file(self, element: IliasPageElement, element_path: PurePath) -> None:
+    async def _handle_file(
+        self,
+        element: IliasPageElement,
+        element_path: PurePath,
+    ) -> Optional[Awaitable[None]]:
        maybe_dl = await self.download(element_path, mtime=element.mtime)
        if not maybe_dl:
-            return
-        dl = maybe_dl  # Not mypy's fault, but explained here: https://github.com/python/mypy/issues/2608
+            return None
+        return self._download_file(element, maybe_dl)

    @_iorepeat(3, "downloading file")
-    async def impl() -> None:
+    async def _download_file(self, element: IliasPageElement, dl: DownloadToken) -> None:
        assert dl  # The function is only reached when dl is not None
        async with dl as (bar, sink):
            await self._stream_from_url(element.url, sink, bar, is_video=False)

-        await impl()
-
    async def _stream_from_url(self, url: str, sink: FileSink, bar: ProgressBar, is_video: bool) -> None:
        async def try_stream() -> bool:
            async with self.session.get(url, allow_redirects=is_video) as resp:

PFERD/pferd.py

@@ -3,9 +3,9 @@ from typing import Dict, List, Optional
 from rich.markup import escape

-from .auth import AUTHENTICATORS, Authenticator, AuthError
+from .auth import AUTHENTICATORS, Authenticator, AuthError, AuthSection
 from .config import Config, ConfigOptionError
-from .crawl import CRAWLERS, Crawler, CrawlError, KitIliasWebCrawler
+from .crawl import CRAWLERS, Crawler, CrawlError, CrawlerSection, KitIliasWebCrawler
 from .logging import log
 from .utils import fmt_path
@@ -15,30 +15,33 @@ class PferdLoadError(Exception):
 class Pferd:
-    def __init__(self, config: Config, cli_crawlers: Optional[List[str]]):
+    def __init__(self, config: Config, cli_crawlers: Optional[List[str]], cli_skips: Optional[List[str]]):
         """
         May throw PferdLoadError.
         """

         self._config = config
-        self._crawlers_to_run = self._find_crawlers_to_run(config, cli_crawlers)
+        self._crawlers_to_run = self._find_crawlers_to_run(config, cli_crawlers, cli_skips)
         self._authenticators: Dict[str, Authenticator] = {}
         self._crawlers: Dict[str, Crawler] = {}

-    def _find_crawlers_to_run(self, config: Config, cli_crawlers: Optional[List[str]]) -> List[str]:
-        log.explain_topic("Deciding which crawlers to run")
-        crawl_sections = [name for name, _ in config.crawl_sections()]
-
-        if cli_crawlers is None:
-            log.explain("No crawlers specified on CLI")
-            log.explain("Running all crawlers specified in config")
-            return crawl_sections
+    def _find_config_crawlers(self, config: Config) -> List[str]:
+        crawl_sections = []
+
+        for name, section in config.crawl_sections():
+            if CrawlerSection(section).skip():
+                log.explain(f"Skipping {name!r}")
+            else:
+                crawl_sections.append(name)
+
+        return crawl_sections

+    def _find_cli_crawlers(self, config: Config, cli_crawlers: List[str]) -> List[str]:
         if len(cli_crawlers) != len(set(cli_crawlers)):
             raise PferdLoadError("Some crawlers were selected multiple times")

-        log.explain("Crawlers specified on CLI")
+        crawl_sections = [name for name, _ in config.crawl_sections()]

         crawlers_to_run = []  # With crawl: prefix
         unknown_names = []  # Without crawl: prefix
@@ -62,10 +65,36 @@ class Pferd:
         return crawlers_to_run

+    def _find_crawlers_to_run(
+        self,
+        config: Config,
+        cli_crawlers: Optional[List[str]],
+        cli_skips: Optional[List[str]],
+    ) -> List[str]:
+        log.explain_topic("Deciding which crawlers to run")
+
+        crawlers: List[str]
+        if cli_crawlers is None:
+            log.explain("No crawlers specified on CLI")
+            log.explain("Running crawlers specified in config")
+            crawlers = self._find_config_crawlers(config)
+        else:
+            log.explain("Crawlers specified on CLI")
+            crawlers = self._find_cli_crawlers(config, cli_crawlers)
+
+        skips = {f"crawl:{name}" for name in cli_skips} if cli_skips else set()
+        for crawler in crawlers:
+            if crawler in skips:
+                log.explain(f"Skipping crawler {crawler!r}")
+        crawlers = [crawler for crawler in crawlers if crawler not in skips]
+
+        return crawlers
+
     def _load_authenticators(self) -> None:
         for name, section in self._config.auth_sections():
             log.print(f"[bold bright_cyan]Loading[/] {escape(name)}")
-            auth_type = section.get("type")
+            auth_type = AuthSection(section).type()
             authenticator_constructor = AUTHENTICATORS.get(auth_type)
             if authenticator_constructor is None:
                 raise ConfigOptionError(name, "type", f"Unknown authenticator type: {auth_type!r}")
@@ -80,7 +109,7 @@ class Pferd:
         for name, section in self._config.crawl_sections():
             log.print(f"[bold bright_cyan]Loading[/] {escape(name)}")
-            crawl_type = section.get("type")
+            crawl_type = CrawlerSection(section).type()
             crawler_constructor = CRAWLERS.get(crawl_type)
             if crawler_constructor is None:
                 raise ConfigOptionError(name, "type", f"Unknown crawler type: {crawl_type!r}")
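The crawler-selection logic above composes three inputs: the config sections (minus those marked `skip = yes`), an optional CLI selection, and the new `--skip` list. A self-contained restatement of just that decision, with hypothetical, already-resolved inputs (in PFERD the CLI names are first matched against section names):

from typing import List, Optional, Set


def find_crawlers_to_run(
    config_crawlers: List[str],
    cli_crawlers: Optional[List[str]],
    cli_skips: Optional[List[str]],
) -> List[str]:
    # A CLI selection wins; otherwise fall back to the config sections
    # (which already had their "skip = yes" entries filtered out).
    crawlers = cli_crawlers if cli_crawlers is not None else config_crawlers

    # --skip takes bare names, so prefix them like the section names.
    skips: Set[str] = {f"crawl:{name}" for name in cli_skips or []}
    return [crawler for crawler in crawlers if crawler not in skips]


# E.g. `pferd --skip Foo` with sections [crawl:Foo] and [crawl:Bar]:
print(find_crawlers_to_run(["crawl:Foo", "crawl:Bar"], None, ["Foo"]))
# -> ['crawl:Bar']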
@@ -1,151 +1,166 @@
-# I'm sorry that this code has become a bit dense and unreadable. While
-# reading, it is important to remember what True and False mean. I'd love to
-# have some proper sum-types for the inputs and outputs, they'd make this code
-# a lot easier to understand.
-
 import ast
 import re
 from abc import ABC, abstractmethod
+from dataclasses import dataclass
+from enum import Enum
 from pathlib import PurePath
-from typing import Dict, Optional, Sequence, Union
+from typing import Callable, Dict, List, Optional, Sequence, TypeVar, Union

 from .logging import log
-from .utils import fmt_path
+from .utils import fmt_path, str_path


-class Rule(ABC):
-    @abstractmethod
-    def transform(self, path: PurePath) -> Union[PurePath, bool]:
-        """
-        Try to apply this rule to the path. Returns another path if the rule
-        was successfully applied, True if the rule matched but resulted in an
-        exclamation mark, and False if the rule didn't match at all.
-        """
-        pass
-
-
-# These rules all use a Union[T, bool] for their right side. They are passed a
-# T if the arrow's right side was a normal string, True if it was an
-# exclamation mark and False if it was missing entirely.
-
-
-class NormalRule(Rule):
-    def __init__(self, left: PurePath, right: Union[PurePath, bool]):
-        self._left = left
-        self._right = right
-
-    def _match_prefix(self, path: PurePath) -> Optional[PurePath]:
-        left_parts = list(reversed(self._left.parts))
-        path_parts = list(reversed(path.parts))
-
-        if len(left_parts) > len(path_parts):
-            return None
-
-        while left_parts and path_parts:
-            left_part = left_parts.pop()
-            path_part = path_parts.pop()
-
-            if left_part != path_part:
-                return None
-
-        if left_parts:
-            return None
-
-        path_parts.reverse()
-        return PurePath(*path_parts)
-
-    def transform(self, path: PurePath) -> Union[PurePath, bool]:
-        if rest := self._match_prefix(path):
-            if isinstance(self._right, bool):
-                return self._right or path
-            else:
-                return self._right / rest
-
-        return False
-
-
-class ExactRule(Rule):
-    def __init__(self, left: PurePath, right: Union[PurePath, bool]):
-        self._left = left
-        self._right = right
-
-    def transform(self, path: PurePath) -> Union[PurePath, bool]:
-        if path == self._left:
-            if isinstance(self._right, bool):
-                return self._right or path
-            else:
-                return self._right
-
-        return False
-
-
-class NameRule(Rule):
-    def __init__(self, subrule: Rule):
-        self._subrule = subrule
-
-    def transform(self, path: PurePath) -> Union[PurePath, bool]:
-        matched = False
-        result = PurePath()
-        for part in path.parts:
-            part_result = self._subrule.transform(PurePath(part))
-            if isinstance(part_result, PurePath):
-                matched = True
-                result /= part_result
-            elif part_result:
-                # If any subrule call ignores its path segment, the entire path
-                # should be ignored
-                return True
-            else:
-                # The subrule doesn't modify this segment, but maybe other
-                # segments
-                result /= part
-
-        if matched:
-            return result
-        else:
-            # The subrule has modified no segments, so this name version of it
-            # doesn't match
-            return False
-
-
-class ReRule(Rule):
-    def __init__(self, left: str, right: Union[str, bool]):
-        self._left = left
-        self._right = right
-
-    def transform(self, path: PurePath) -> Union[PurePath, bool]:
-        if match := re.fullmatch(self._left, str(path)):
-            if isinstance(self._right, bool):
-                return self._right or path
-
-            vars: Dict[str, Union[str, int, float]] = {}
-
-            # For some reason, mypy thinks that "groups" has type List[str].
-            # But since elements of "match.groups()" can be None, mypy is
-            # wrong.
-            groups: Sequence[Optional[str]] = [match[0]] + list(match.groups())
-
-            for i, group in enumerate(groups):
-                if group is None:
-                    continue
-
-                vars[f"g{i}"] = group
-
-                try:
-                    vars[f"i{i}"] = int(group)
-                except ValueError:
-                    pass
-
-                try:
-                    vars[f"f{i}"] = float(group)
-                except ValueError:
-                    pass
-
-            result = eval(f"f{self._right!r}", vars)
-            return PurePath(result)
-
-        return False
+class ArrowHead(Enum):
+    NORMAL = 0
+    SEQUENCE = 1
+
+
+class Ignore:
+    pass
+
+
+class Empty:
+    pass
+
+
+RightSide = Union[str, Ignore, Empty]
+
+
+@dataclass
+class Transformed:
+    path: PurePath
+
+
+class Ignored:
+    pass
+
+
+TransformResult = Optional[Union[Transformed, Ignored]]
+
+
+@dataclass
+class Rule:
+    left: str
+    left_index: int
+    name: str
+    head: ArrowHead
+    right: RightSide
+    right_index: int
+
+    def right_result(self, path: PurePath) -> Union[str, Transformed, Ignored]:
+        if isinstance(self.right, str):
+            return self.right
+        elif isinstance(self.right, Ignore):
+            return Ignored()
+        elif isinstance(self.right, Empty):
+            return Transformed(path)
+        else:
+            raise RuntimeError(f"Right side has invalid type {type(self.right)}")
+
+
+class Transformation(ABC):
+    def __init__(self, rule: Rule):
+        self.rule = rule
+
+    @abstractmethod
+    def transform(self, path: PurePath) -> TransformResult:
+        pass
+
+
+class ExactTf(Transformation):
+    def transform(self, path: PurePath) -> TransformResult:
+        if path != PurePath(self.rule.left):
+            return None
+
+        right = self.rule.right_result(path)
+        if not isinstance(right, str):
+            return right
+
+        return Transformed(PurePath(right))
+
+
+class ExactReTf(Transformation):
+    def transform(self, path: PurePath) -> TransformResult:
+        match = re.fullmatch(self.rule.left, str_path(path))
+        if not match:
+            return None
+
+        right = self.rule.right_result(path)
+        if not isinstance(right, str):
+            return right
+
+        # For some reason, mypy thinks that "groups" has type List[str]. But
+        # since elements of "match.groups()" can be None, mypy is wrong.
+        groups: Sequence[Optional[str]] = [match[0]] + list(match.groups())
+
+        locals_dir: Dict[str, Union[str, int, float]] = {}
+        for i, group in enumerate(groups):
+            if group is None:
+                continue
+
+            locals_dir[f"g{i}"] = group
+
+            try:
+                locals_dir[f"i{i}"] = int(group)
+            except ValueError:
+                pass
+
+            try:
+                locals_dir[f"f{i}"] = float(group)
+            except ValueError:
+                pass
+
+        result = eval(f"f{right!r}", {}, locals_dir)
+        return Transformed(PurePath(result))
+
+
+class RenamingParentsTf(Transformation):
+    def __init__(self, sub_tf: Transformation):
+        super().__init__(sub_tf.rule)
+        self.sub_tf = sub_tf
+
+    def transform(self, path: PurePath) -> TransformResult:
+        for i in range(len(path.parts), -1, -1):
+            parent = PurePath(*path.parts[:i])
+            child = PurePath(*path.parts[i:])
+
+            transformed = self.sub_tf.transform(parent)
+            if not transformed:
+                continue
+            elif isinstance(transformed, Transformed):
+                return Transformed(transformed.path / child)
+            elif isinstance(transformed, Ignored):
+                return transformed
+            else:
+                raise RuntimeError(f"Invalid transform result of type {type(transformed)}: {transformed}")
+
+        return None
+
+
+class RenamingPartsTf(Transformation):
+    def __init__(self, sub_tf: Transformation):
+        super().__init__(sub_tf.rule)
+        self.sub_tf = sub_tf
+
+    def transform(self, path: PurePath) -> TransformResult:
+        result = PurePath()
+        any_part_matched = False
+
+        for part in path.parts:
+            transformed = self.sub_tf.transform(PurePath(part))
+            if not transformed:
+                result /= part
+            elif isinstance(transformed, Transformed):
+                result /= transformed.path
+                any_part_matched = True
+            elif isinstance(transformed, Ignored):
+                return transformed
+            else:
+                raise RuntimeError(f"Invalid transform result of type {type(transformed)}: {transformed}")
+
+        if any_part_matched:
+            return Transformed(result)
+        else:
+            return None


 class RuleParseError(Exception):
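To see how the new sum-type-style results compose: an empty arrow name wraps `ExactTf` in `RenamingParentsTf`, so a plain `-->` rule also renames directories deeper in the tree. A small usage sketch (the `PFERD.transformer` import path is assumed; the compare view doesn't show file names):

from pathlib import PurePath

from PFERD.transformer import ArrowHead, ExactTf, RenamingParentsTf, Rule

# The rule "Foo --> Bar", built by hand: empty arrow name, NORMAL head.
rule = Rule(left="Foo", left_index=0, name="", head=ArrowHead.NORMAL,
            right="Bar", right_index=8)
tf = RenamingParentsTf(ExactTf(rule))

print(tf.transform(PurePath("Foo")).path)          # Bar
print(tf.transform(PurePath("Foo/a/b.pdf")).path)  # Bar/a/b.pdf (prefix renamed)
print(tf.transform(PurePath("Other/x.pdf")))       # None (no match)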
@@ -162,18 +177,15 @@ class RuleParseError(Exception):
         log.error_contd(f"{spaces}^--- {self.reason}")


+T = TypeVar("T")
+
+
 class Line:
     def __init__(self, line: str, line_nr: int):
         self._line = line
         self._line_nr = line_nr
         self._index = 0

-    def get(self) -> Optional[str]:
-        if self._index < len(self._line):
-            return self._line[self._index]
-
-        return None
-
     @property
     def line(self) -> str:
         return self._line
@@ -190,155 +202,196 @@ class Line:
     def index(self, index: int) -> None:
         self._index = index

-    def advance(self) -> None:
-        self._index += 1
-
-    def expect(self, string: str) -> None:
-        for char in string:
-            if self.get() == char:
-                self.advance()
-            else:
-                raise RuleParseError(self, f"Expected {char!r}")
+    @property
+    def rest(self) -> str:
+        return self.line[self.index:]
+
+    def peek(self, amount: int = 1) -> str:
+        return self.rest[:amount]
+
+    def take(self, amount: int = 1) -> str:
+        string = self.peek(amount)
+        self.index += len(string)
+        return string
+
+    def expect(self, string: str) -> str:
+        if self.peek(len(string)) == string:
+            return self.take(len(string))
+        else:
+            raise RuleParseError(self, f"Expected {string!r}")
+
+    def expect_with(self, string: str, value: T) -> T:
+        self.expect(string)
+        return value
+
+    def one_of(self, parsers: List[Callable[[], T]], description: str) -> T:
+        for parser in parsers:
+            index = self.index
+            try:
+                return parser()
+            except RuleParseError:
+                self.index = index
+
+        raise RuleParseError(self, description)
+
+
+# RULE = LEFT SPACE '-' NAME '-' HEAD (SPACE RIGHT)?
+# SPACE = ' '+
+# NAME = '' | 'exact' | 'name' | 're' | 'exact-re' | 'name-re'
+# HEAD = '>' | '>>'
+# LEFT = STR | QUOTED_STR
+# RIGHT = STR | QUOTED_STR | '!'
+
+
+def parse_zero_or_more_spaces(line: Line) -> None:
+    while line.peek() == " ":
+        line.take()
+
+
+def parse_one_or_more_spaces(line: Line) -> None:
+    line.expect(" ")
+    parse_zero_or_more_spaces(line)
+
+
+def parse_str(line: Line) -> str:
+    result = []
+    while c := line.peek():
+        if c == " ":
+            break
+        else:
+            line.take()
+            result.append(c)
+
+    if result:
+        return "".join(result)
+    else:
+        raise RuleParseError(line, "Expected non-space character")


 QUOTATION_MARKS = {'"', "'"}


-def parse_string_literal(line: Line) -> str:
+def parse_quoted_str(line: Line) -> str:
     escaped = False

     # Points to first character of string literal
     start_index = line.index

-    quotation_mark = line.get()
+    quotation_mark = line.peek()
     if quotation_mark not in QUOTATION_MARKS:
-        # This should never happen as long as this function is only called from
-        # parse_string.
-        raise RuleParseError(line, "Invalid quotation mark")
-    line.advance()
+        raise RuleParseError(line, "Expected quotation mark")
+    line.take()

-    while c := line.get():
+    while c := line.peek():
         if escaped:
             escaped = False
-            line.advance()
+            line.take()
         elif c == quotation_mark:
-            line.advance()
+            line.take()
             stop_index = line.index
             literal = line.line[start_index:stop_index]
-            return ast.literal_eval(literal)
+            try:
+                return ast.literal_eval(literal)
+            except SyntaxError as e:
+                line.index = start_index
+                raise RuleParseError(line, str(e)) from e
         elif c == "\\":
             escaped = True
-            line.advance()
+            line.take()
         else:
-            line.advance()
+            line.take()

     raise RuleParseError(line, "Expected end of string literal")


-def parse_until_space_or_eol(line: Line) -> str:
-    result = []
-    while c := line.get():
-        if c == " ":
-            break
-        result.append(c)
-        line.advance()
-
-    return "".join(result)
-
-
-def parse_string(line: Line) -> Union[str, bool]:
-    if line.get() in QUOTATION_MARKS:
-        return parse_string_literal(line)
-    else:
-        string = parse_until_space_or_eol(line)
-        if string == "!":
-            return True
-        return string
+def parse_left(line: Line) -> str:
+    if line.peek() in QUOTATION_MARKS:
+        return parse_quoted_str(line)
+    else:
+        return parse_str(line)
+
+
+def parse_right(line: Line) -> Union[str, Ignore]:
+    c = line.peek()
+    if c in QUOTATION_MARKS:
+        return parse_quoted_str(line)
+    else:
+        string = parse_str(line)
+        if string == "!":
+            return Ignore()
+        return string


-def parse_arrow(line: Line) -> str:
-    line.expect("-")
-
-    name = []
-    while True:
-        c = line.get()
-        if not c:
-            raise RuleParseError(line, "Expected rest of arrow")
-        elif c == "-":
-            line.advance()
-            c = line.get()
-            if not c:
-                raise RuleParseError(line, "Expected rest of arrow")
-            elif c == ">":
-                line.advance()
-                break  # End of arrow
-            else:
-                name.append("-")
-                continue
-        else:
-            name.append(c)
-        line.advance()
-
-    return "".join(name)
-
-
-def parse_whitespace(line: Line) -> None:
-    line.expect(" ")
-    while line.get() == " ":
-        line.advance()
+def parse_arrow_name(line: Line) -> str:
+    return line.one_of([
+        lambda: line.expect("exact-re"),
+        lambda: line.expect("exact"),
+        lambda: line.expect("name-re"),
+        lambda: line.expect("name"),
+        lambda: line.expect("re"),
+        lambda: line.expect(""),
+    ], "Expected arrow name")
+
+
+def parse_arrow_head(line: Line) -> ArrowHead:
+    return line.one_of([
+        lambda: line.expect_with(">>", ArrowHead.SEQUENCE),
+        lambda: line.expect_with(">", ArrowHead.NORMAL),
+    ], "Expected arrow head")


 def parse_eol(line: Line) -> None:
-    if line.get() is not None:
+    if line.peek():
         raise RuleParseError(line, "Expected end of line")


 def parse_rule(line: Line) -> Rule:
-    # Parse left side
-    leftindex = line.index
-    left = parse_string(line)
-    if isinstance(left, bool):
-        line.index = leftindex
-        raise RuleParseError(line, "Left side can't be '!'")
-    leftpath = PurePath(left)
-
-    # Parse arrow
-    parse_whitespace(line)
-    arrowindex = line.index
-    arrowname = parse_arrow(line)
-
-    # Parse right side
-    if line.get():
-        parse_whitespace(line)
-        right = parse_string(line)
-    else:
-        right = False
-    rightpath: Union[PurePath, bool]
-    if isinstance(right, bool):
-        rightpath = right
-    else:
-        rightpath = PurePath(right)
-
-    parse_eol(line)
-
-    # Dispatch
-    if arrowname == "":
-        return NormalRule(leftpath, rightpath)
-    elif arrowname == "name":
-        if len(leftpath.parts) > 1:
-            line.index = leftindex
-            raise RuleParseError(line, "SOURCE must be a single name, not multiple segments")
-        return NameRule(ExactRule(leftpath, rightpath))
-    elif arrowname == "exact":
-        return ExactRule(leftpath, rightpath)
-    elif arrowname == "re":
-        return ReRule(left, right)
-    elif arrowname == "name-re":
-        return NameRule(ReRule(left, right))
-    else:
-        line.index = arrowindex + 1  # For nicer error message
-        raise RuleParseError(line, f"Invalid arrow name {arrowname!r}")
+    parse_zero_or_more_spaces(line)
+    left_index = line.index
+    left = parse_left(line)
+
+    parse_one_or_more_spaces(line)
+
+    line.expect("-")
+    name = parse_arrow_name(line)
+    line.expect("-")
+    head = parse_arrow_head(line)
+
+    right_index = line.index
+    right: RightSide
+    try:
+        parse_zero_or_more_spaces(line)
+        parse_eol(line)
+        right = Empty()
+    except RuleParseError:
+        line.index = right_index
+        parse_one_or_more_spaces(line)
+        right = parse_right(line)
+        parse_eol(line)
+
+    return Rule(left, left_index, name, head, right, right_index)
+
+
+def parse_transformation(line: Line) -> Transformation:
+    rule = parse_rule(line)
+
+    if rule.name == "":
+        return RenamingParentsTf(ExactTf(rule))
+    elif rule.name == "exact":
+        return ExactTf(rule)
+    elif rule.name == "name":
+        if len(PurePath(rule.left).parts) > 1:
+            line.index = rule.left_index
+            raise RuleParseError(line, "Expected name, not multiple segments")
+        return RenamingPartsTf(ExactTf(rule))
+    elif rule.name == "re":
+        return RenamingParentsTf(ExactReTf(rule))
+    elif rule.name == "exact-re":
+        return ExactReTf(rule)
+    elif rule.name == "name-re":
+        return RenamingPartsTf(ExactReTf(rule))
+    else:
+        raise RuntimeError(f"Invalid arrow name {rule.name!r}")
 class Transformer:
@@ -347,32 +400,40 @@ class Transformer:
         May throw a RuleParseException.
         """

-        self._rules = []
+        self._tfs = []
         for i, line in enumerate(rules.split("\n")):
             line = line.strip()
             if line:
-                rule = parse_rule(Line(line, i))
-                self._rules.append((line, rule))
+                tf = parse_transformation(Line(line, i))
+                self._tfs.append((line, tf))

     def transform(self, path: PurePath) -> Optional[PurePath]:
-        for i, (line, rule) in enumerate(self._rules):
+        for i, (line, tf) in enumerate(self._tfs):
             log.explain(f"Testing rule {i+1}: {line}")

             try:
-                result = rule.transform(path)
+                result = tf.transform(path)
             except Exception as e:
                 log.warn(f"Error while testing rule {i+1}: {line}")
                 log.warn_contd(str(e))
                 continue

-            if isinstance(result, PurePath):
-                log.explain(f"Match found, transformed path to {fmt_path(result)}")
-                return result
-            elif result:  # Exclamation mark
-                log.explain("Match found, path ignored")
-                return None
-            else:
+            if not result:
                 continue

-        log.explain("No rule matched, path is unchanged")
+            if isinstance(result, Ignored):
+                log.explain("Match found, path ignored")
+                return None
+
+            if tf.rule.head == ArrowHead.NORMAL:
+                log.explain(f"Match found, transformed path to {fmt_path(result.path)}")
+                path = result.path
+                break
+            elif tf.rule.head == ArrowHead.SEQUENCE:
+                log.explain(f"Match found, updated path to {fmt_path(result.path)}")
+                path = result.path
+            else:
+                raise RuntimeError(f"Invalid transform result of type {type(result)}: {result}")
+
+        log.explain(f"Final result: {fmt_path(path)}")
         return path
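The reworked `transform` loop is what gives `>` and `>>` their different semantics: both rewrite the path, but only `>>` falls through to later rules. A sketch, assuming the `Transformer` constructor takes the whole rule block as one string (as the `rules.split("\n")` loop suggests) and the same import path as above:

from pathlib import PurePath

from PFERD.transformer import Transformer

# A ">>" rule rewrites the path and keeps matching;
# a ">" rule ends matching for that path.
tf = Transformer(
    "Übungsunterlagen -->> Übung\n"
    "Übung/Blatt1.pdf --> Blätter/Blatt-01.pdf"
)

print(tf.transform(PurePath("Übungsunterlagen/Blatt1.pdf")))
# -> Blätter/Blatt-01.pdf: rule 1 renamed the folder, rule 2 then matched.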
@@ -91,8 +91,14 @@ def url_set_query_params(url: str, params: Dict[str, str]) -> str:
     return result


+def str_path(path: PurePath) -> str:
+    if not path.parts:
+        return "."
+    return "/".join(path.parts)
+
+
 def fmt_path(path: PurePath) -> str:
-    return repr(str(path))
+    return repr(str_path(path))


 def fmt_real_path(path: Path) -> str:
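`str_path` is what makes the "'/' as path separator" guarantee from the changelog hold on Windows: it joins `parts` itself instead of delegating to the platform flavour's `str()`. A mirror of the helper, runnable standalone:

from pathlib import PurePath, PurePosixPath, PureWindowsPath


def str_path(path: PurePath) -> str:
    # Same logic as the helper above: always join with "/".
    if not path.parts:
        return "."
    return "/".join(path.parts)


print(str_path(PureWindowsPath(r"Foo\Bar\baz.pdf")))  # Foo/Bar/baz.pdf
print(str_path(PurePosixPath("Foo/Bar/baz.pdf")))     # Foo/Bar/baz.pdf
print(str_path(PurePosixPath()))                      # .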
@@ -1,2 +1,2 @@
 NAME = "PFERD"
-VERSION = "3.0.0"
+VERSION = "3.2.0"
@@ -28,9 +28,9 @@ The use of [venv](https://docs.python.org/3/library/venv.html) is recommended.

 ## Basic usage

-PFERD can be run directly from the command line with no config file.
-Run `pferd -h` to get an overview of available commands and options.
-Run `pferd <command> -h` to see which options a command has.
+PFERD can be run directly from the command line with no config file. Run `pferd
+-h` to get an overview of available commands and options. Run `pferd <command>
+-h` to see which options a command has.

 For example, you can download your personal desktop from the KIT ILIAS like
 this:
@@ -116,17 +116,18 @@ transform =
     Online-Tests --> !
     Vorlesungswerbung --> !

+    # Rename folders
+    Lehrbücher --> Vorlesung
+    # Note the ">>" arrow head which lets us apply further rules to files moved to "Übung"
+    Übungsunterlagen -->> Übung
+
     # Move exercises to own folder. Rename them to "Blatt-XX.pdf" to make them sort properly
-    "Übungsunterlagen/(\d+). Übungsblatt.pdf" -re-> Blätter/Blatt-{i1:02}.pdf
+    "Übung/(\d+). Übungsblatt.pdf" -re-> Blätter/Blatt-{i1:02}.pdf
     # Move solutions to own folder. Rename them to "Blatt-XX-Lösung.pdf" to make them sort properly
-    "Übungsunterlagen/(\d+). Übungsblatt.*Musterlösung.pdf" -re-> Blätter/Blatt-{i1:02}-Lösung.pdf
+    "Übung/(\d+). Übungsblatt.*Musterlösung.pdf" -re-> Blätter/Blatt-{i1:02}-Lösung.pdf

     # The course has nested folders with the same name - flatten them
-    "Übungsunterlagen/(.+?)/\\1/(.*)" -re-> Übung/{g1}/{g2}
-
-    # Rename remaining folders
-    Übungsunterlagen --> Übung
-    Lehrbücher --> Vorlesung
+    "Übung/(.+?)/\\1" -re-> Übung/{g1}

 [crawl:Bar]
 type = kit-ilias-web
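Tracing the updated README rules by hand: the `-->>` rule renames the folder and keeps matching, then the `-re->` rule captures the sheet number as `i1` and zero-pads it via `{i1:02}`. A sketch under the same `PFERD.transformer` import-path assumption as above:

from pathlib import PurePath

from PFERD.transformer import Transformer

tf = Transformer(
    "Übungsunterlagen -->> Übung\n"
    r'"Übung/(\d+). Übungsblatt.pdf" -re-> Blätter/Blatt-{i1:02}.pdf'
)

# ">>" renames the folder, then the regex rule captures "1" as i1
# and formats it as "01".
print(tf.transform(PurePath("Übungsunterlagen/1. Übungsblatt.pdf")))
# -> Blätter/Blatt-01.pdf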
@@ -12,6 +12,6 @@ pip install --upgrade setuptools
 # Installing PFERD itself
 pip install --editable .

-# Installing various tools
-pip install --upgrade mypy flake8 autopep8 isort
-pip install --upgrade pyinstaller
+# Installing tools and type hints
+pip install --upgrade mypy flake8 autopep8 isort pyinstaller
+pip install --upgrade types-chardet types-certifi