When printing out the configuration JSON, the Redirector extension
expects regex escapes to be escaped themselves, so `\` becomes `\\`.
However, Crystal also treats `\` as an escape character in string
literals, so each backslash must be escaped again: a single backslash
becomes `\\\\`.
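As a small illustration of the double escaping (the pattern here is a
stand-in, not Scribe's actual expression):
```crystal
# Four backslashes in Crystal source produce two in the output string,
# which the Redirector JSON config reads back as one literal regex backslash.
pattern = "medium\\\\.com"
puts pattern # prints: medium\\.com
```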
Instead of providing long, detailed instructions for configuring the
Redirector extension, this provides a single JSON file that users can
import. I started by making a single file stored in the
`public/assets` directory, but then realized this was a regression
since the instructions were customized to each domain. Instead I can
use Lucky's [data] response to dynamically build the JSON config.
[data]: https://luckyframework.org/guides/http-and-routing/request-and-response#handling-responses
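A rough sketch of the idea, assuming a Lucky action along these lines
(the action name, route, config keys, and the exact arguments to `data`
are illustrative, not Scribe's actual code):
```crystal
class RedirectorConfig::Show < BrowserAction
  get "/redirector-config.json" do
    # app_domain stands in for however each instance's domain is configured.
    app_domain = "scribe.example.com"

    config = {
      "redirects" => [{
        "exampleUrl"  => "https://medium.com/@user/example-123abc",
        "redirectUrl" => "https://#{app_domain}/$1",
      }],
    }

    # Serve the per-domain config as a downloadable JSON file.
    data config.to_json, filename: "scribe-redirector-config.json"
  end
end
```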
This sets the width of code blocks to be the width of the page, and
adds a scrollbar for long blocks. Article `c146e768bb41` has some
examples.
I could also have wrapped the code blocks, but as pointed out by
[~kaki87] this often reduces readability. Hence: scrollbars.
[~kaki87]: https://todo.sr.ht/~edwardloveall/Scribe/6#event-188395
Posts, like 8661f4724aa9, can go missing if the account or post was
removed. In this case, the API returns data like this:
```json
{
  "data": {
    "post": null
  }
}
```
When this happens, we can detect it because the parsed response now has
a nil value (`response.data.post == nil`) and construct an `EmptyPage`
instead of a `Page`. The `Articles::Show` action can then render
conditionally based on whether the response from `PageConverter` is a
`Page` or an `EmptyPage`.
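A rough sketch of that branching (the class and method names here are
approximations of the flow described above, not necessarily Scribe's
exact code):
```crystal
# PageConverter returns an EmptyPage when the API response has a nil post.
page = PageConverter.new.convert(response)

case page
when EmptyPage
  html Errors::ShowMissingPostPage # explain that the post was removed
when Page
  html Articles::ShowPage, page: page
end
```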
Since the application does not use a database, it's confusing to have to
set a bogus database URL environment variable. This change follows [the
Lucky guide][1] suggestion for disabling the need for database
configuration. That makes the setup a little easier.
[1]: https://www.luckyframework.org/guides/database/intro-to-avram-and-orms
Since the article ID regular expression wasn't anchored to the end of
the URL, it would grab any hex characters it found after a / or -. For
example, /@user/bacon-123abc would just grab `bac`. Not great.
This anchors the ID at the end of the string so that it will be more
likely to catch IDs.
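A simplified illustration of the difference (the real pattern in Scribe
may differ):
```crystal
path = "/@user/bacon-123abc"

# Unanchored: grabs the first hex run after a / or -, i.e. "bac" from "bacon".
path[/[\/-]([0-9a-f]+)/, 1]  # => "bac"

# Anchored to the end of the string: grabs the trailing article ID.
path[/[\/-]([0-9a-f]+)$/, 1] # => "123abc"
```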
Previously the link on the error page only linked to the path
component of the URL, e.g. `/search`, ignoring any query params, e.g.
`/search?q=hello`. This uses the `HTTP::Request` `resource` method,
which appears to capture both.
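For reference, the difference between `path` and `resource` on
`HTTP::Request`:
```crystal
require "http/request"

request = HTTP::Request.new("GET", "/search?q=hello")
request.path     # => "/search"
request.resource # => "/search?q=hello"
```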
A new `ArticleIdParser` class takes in an `HTTP::Request` object and
parses the article ID from it. It intentionally fails on tag, user, and
search pages and attempts to only catch articles.
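A hypothetical view of that behaviour from the caller's side (the
`parse` method and its results are illustrative, not Scribe's actual
interface):
```crystal
ArticleIdParser.parse(HTTP::Request.new("GET", "/@user/bacon-123abc")) # => "123abc"
ArticleIdParser.parse(HTTP::Request.new("GET", "/tag/programming"))    # fails: tag page
ArticleIdParser.parse(HTTP::Request.new("GET", "/search?q=crystal"))   # fails: search page
```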
Instead of showing the default Lucky error page, the styles now match
Scribe. In addition, if a URL can't be parsed, Scribe gives some
information as to why this might be (namely that Scribe can only deal
with article pages).
Medium uses UTF-16 character offsets (likely to make it easier to parse
in JavaScript) but Crystal uses UTF-8. Converting strings to UTF-16 to
do offset calculation and then back to UTF-8 fixes some markup bugs.
---
Medium calculates markup offsets using UTF-16 encoding. Some characters,
like emoji, count as multiple UTF-16 code units, which affects those
offsets. For example, in UTF-16 💸 counts as two code units, but a
Crystal string counts it as one character. This is a problem for markup
generation because it can offset the markup and even cause out-of-range
errors.
Take the following example:
💸💸!
Imagine that the `!` is bold but the emoji aren't. For Crystal, the bold
range starts at char index 2 and ends at char index 3. Medium's markup
will say it goes from character 4 to 5. In a 3-character string like
this, trying to access the character range 4...5 is an error because
those indices are already out of bounds.
My theory is that this is meant to be compatible with JavaScript's
string length calculations, as Medium is primarily a platform built for
the web:
```js
"a".length // 1
"💸".length // 2
"👩❤️💋👩".length // 11
```
To get these same numbers in Crystal, strings must be converted to
UTF-16:
```crystal
"a".to_utf16.size # 1
"💸".to_utf16.size # 2
"👩❤️💋👩".to_utf16.size # 11
```
The `MarkupConverter` now converts text into UTF-16 code units on
initialization. Once it's figured out the range of code units needed
for each piece of markup, it converts the slice back into UTF-8 strings.
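A minimal sketch of that conversion (not the exact `MarkupConverter`
code):
```crystal
text  = "💸💸!"
utf16 = text.to_utf16    # Slice(UInt16) with 5 code units

# Medium's offsets are in UTF-16 code units: the "!" spans 4...5.
slice = utf16[4, 1]      # one code unit starting at UTF-16 index 4
String.from_utf16(slice) # => "!"
```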
Previously, GitHub gists were embedded. The gist URL would be detected
in a paragraph and the page would render a script like:
```html
<script src="https://gist.github.com/user/gist_id.js"></script>
```
The script would then embed the gist on the page. However, gists contain
multiple files. It's technically possible to embed a single file in the
same way by appending a `file` query param:
```html
<script src="https://gist.github.com/user/gist_id.js?file=foo.txt"></script>
```
I wanted to try and tackle proxying gists instead.
Overview
--------
At a high level, the PageConverter kicks off the work of fetching and
storing the gist content, then sends that content down to the
`ParagraphConverter`. When a paragraph comes up that contains a gist
embed, it retrieves the previously fetched content. This allows all the
necessary content to be fetched up front so that the minimum number of
requests is made.
Fetching Gists
--------------
There is now a `GithubClient` class that gets gist content from GitHub's
REST API. The gist API response looks something like this (non-relevant
keys removed):
```json
{
  "files": {
    "file-one.txt": {
      "filename": "file-one.txt",
      "raw_url": "https://gist.githubusercontent.com/<username>/<id>/raw/<file_id>/file-one.txt",
      "content": "..."
    },
    "file-two.txt": {
      "filename": "file-two.txt",
      "raw_url": "https://gist.githubusercontent.com/<username>/<id>/raw/<file_id>/file-two.txt",
      "content": "..."
    }
  }
}
```
That response gets turned into a bunch of `GistFile` objects that are
then stored in a request-level `GistStore`. Crystal's JSON parsing does
not make it easy to parse JSON with arbitrary keys into objects. This is
because each key corresponds to an object property, like
`property name : String`. If Crystal doesn't know the keys ahead of
time, there's no way to know what methods to create.
That's a problem here because the key for each gist file is the unique
filename. Fortunately, the keys within each _file_ object follow the
same pattern and are easy to parse into a `GistFile` object. To turn
gist file JSON into Crystal objects, the `GithubClient` turns the whole
response into a `JSON::Any`, which acts like a Hash. Then it extracts
just the file data objects and parses those into `GistFile` objects.
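A rough sketch of that approach (simplified; the property names match
the API fields shown above, but this is not Scribe's exact code):
```crystal
require "json"

class GistFile
  include JSON::Serializable

  property filename : String
  property raw_url : String
  property content : String
end

# Walk the arbitrary filename keys via JSON::Any, then convert each
# file object into a GistFile.
json  = JSON.parse(response_body) # response_body: the raw API response
files = json["files"].as_h.map do |_filename, file_json|
  GistFile.from_json(file_json.to_json)
end
```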
Those `GistFile` objects are then cached in a `GistStore` that is shared
for the page, which means one gist cache per request/article. `GistFile`
objects can be fetched out of the store by filename, or, if no file is
specified, the store returns all files in the gist.
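A hypothetical sketch of that lookup behaviour (the store's method names
are illustrative, not Scribe's actual interface):
```crystal
store = GistStore.new
store.add("gist_id", files)            # cache the parsed files for one gist

store.fetch("gist_id", "file-one.txt") # => just that GistFile
store.fetch("gist_id", nil)            # => every GistFile in the gist
```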
Each `GistFile` is rendered as a link (titled with the file's name) to
the file in the gist on GitHub, followed by a code block of the file's
contents.
In summary, the `PageConverter`:
* Scans the paragraphs for GitHub gists using `GistScanner`
* Requests their data from GitHub using the `GithubClient`
* Parses the response into `GistFile`s and populates the `GistStore`
* Passes that `GistStore` to the `ParagraphConverter` to use when
constructing the page nodes
Caching
-------
GitHub limits API requests to 5000/hour with a valid API token and
60/hour without. 60 is pretty tight for the usage that scribe.rip gets,
but 5000 is reasonable most of the time. Not every article has an
embedded gist, but some articles have multiple gists. A viral article
(of which Scribe has seen two at the time of this commit) might receive
a little over 127k hits/day, which is an average of over 5300/hour. If
that article had a gist, Scribe would reach the API limit during parts
of the day with high traffic. If it had multiple gists, it would hit it
even more. However, average traffic is around 30k visits/day which would
be well under the limit, assuming average load.
To help avoid hitting that limit, a `GistStore` holds all the `GistFile`
objects per gist. The logic in `GistScanner` only returns unique gist
URLs, so each gist is requested just once even if multiple files from
one gist appear in an article. This limits the number of times Scribe
hits the GitHub API.
If Scribe is rate-limited, instead of populating a `GistStore` the
`PageConverter` will create a `RateLimitedGistStore`. This is an object
that acts like the `GistStore` but returns `RateLimitedGistFile` objects
instead of `GistFile` objects. This allows Scribe to degrade gracefully
when it reaches the rate limit.
If rate-limiting becomes a regular problem, Scribe could also be
reworked to fallback to the embedded gists again.
API Credentials
---------------
API credentials are in the form of a GitHub username and a personal
access token attached to that username. To get a token, visit
https://github.com/settings/tokens and create a new token. The only
permission it needs is `gist`.
This token is set via the `GITHUB_PERSONAL_ACCESS_TOKEN` environment
variable. The username also needs to be set via `GITHUB_USERNAME`. When
developing locally, these can both be set in the `.env` file.
Authentication is probably not necessary locally, but it's there if you
want to test. If either variable is missing, unauthenticated requests
are made.
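A rough sketch of that fallback (the request itself is illustrative;
GitHub accepts the username and token as Basic auth credentials):
```crystal
require "http/client"
require "base64"

username = ENV["GITHUB_USERNAME"]?
token    = ENV["GITHUB_PERSONAL_ACCESS_TOKEN"]?

headers = HTTP::Headers.new
if username && token
  credentials = Base64.strict_encode("#{username}:#{token}")
  headers["Authorization"] = "Basic #{credentials}"
end

# Without the header this is an unauthenticated request (60/hour limit).
HTTP::Client.get("https://api.github.com/gists/<id>", headers: headers)
```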
Rendering
---------
The node tree itself holds a `GithubGist` object. It has a reference to
the `GistStore` and the original gist URL. When it renders, the page
requests the gist's `files`. The gist ID and optional file are detected
and then used to request the file(s) from the `GistStore`. Gists render
as a list of each file's contents and a link to the file on GitHub.
If the requests were rate-limited, the store is a
`RateLimitedGistStore` and the files are `RateLimitedGistFile`s. These
rate-limited objects render with a link to the gist on GitHub and text
saying that Scribe has been rate-limited.
If somehow the file requested doesn't exist in the store, it displays
similarly to the rate-limited file but with "file missing" text instead
of "rate limited" text.
GitHub API docs: https://docs.github.com/en/rest/reference/gists
Rate Limiting docs: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
This is an experiment to see if it forces me to actually update the
version before I build. The idea is that I need to actually commit the
version, which will make it more likely that all instances can pull
down the code and display the correct version if I've updated it
myself. It uses `git show` to grab the committed contents of
`src/version` and then checks whether it matches today's date.
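A minimal sketch of that check, assuming `src/version` contains a
date-based version string (the exact format is an assumption):
```crystal
# Compare the committed contents of src/version with today's date.
committed = `git show HEAD:src/version`.strip
today     = Time.local.to_s("%Y.%m.%d")

unless committed == today
  abort "src/version is #{committed}, expected #{today}; commit the new version first."
end
```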
The most common question is "How do I set my custom domain?" (answer:
`APP_DOMAIN`), but this also requires setting `LUCKY_ENV=production`,
which in turn requires `SECRET_KEY_BASE`, `DATABASE_URL`, and `PORT`.
This specifies advanced options for configuring the Redirector
extension. If everything is left on (like images), things will break
(like images). It also improves the regular expression a bit to account
for the image CDN.
Co-authored-by: Austin Huang <im@austinhuang.me>
The post ID 34dead42a28 contained a new paragraph type: H2. Previously
the only known header types were H3 and H4. In this case, the paragraph
doesn't actually get rendered because it's the page title, which is
removed from the page nodes (see commits 6baba803 and then fba87c10).
However, if somehow an author is able to get an H2 paragraph into the
page, it will display as an `<h1>`, just as H3 displays as `<h2>` and
H4 displays as `<h3>`.
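A sketch of that mapping (illustrative, not necessarily how the
converter is written):
```crystal
# Medium heading types shift up one level when rendered as HTML.
def heading_tag(paragraph_type : String) : String
  case paragraph_type
  when "H2" then "h1"
  when "H3" then "h2"
  when "H4" then "h3"
  else           "p" # non-heading paragraphs
  end
end
```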
This patch adds support for development with the Nix package manager. In
order to support the traditional nix-shell tool as well as the (still
experimental) Nix Flakes feature of the upcoming version of Nix, this
patch adds shell.nix *and* flake.nix/flake.lock. Usage instructions
have been added to the README.
This patch further improves the proposed pattern for the Redirector
extension. In contrast to the old pattern, …
* … it will redirect the URL https://medium.com.
* … it will *not* redirect URLs with top-level domains like mediumXcom.
(This point is purely theoretical, but it makes the regular expression
more correct and consistent.)
* … it will *not* redirect URLs like https://link.medium.com/AXEtCilplkb
which Scribe currently cannot handle. These are shortened URLs that
users get when they use the Twitter button on Medium to share a post.
In order to implement the last point (not matching link.medium.com), the
pattern uses negative lookbehind. This feature of regular expressions is
supported by all recent browsers for which Redirector is available
(Firefox, Chrome, Edge, Opera)[^1], including the current version of
Firefox ESR (Extended Support Release).
[^1]: https://caniuse.com/js-regexp-lookbehind
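To illustrate the negative lookbehind technique with a toy pattern
(this is a Crystal demonstration of the idea, not the actual Redirector
pattern from this patch):
```crystal
# (?<!link\.) rejects a match when "medium.com" is preceded by "link.".
pattern = /^https?:\/\/([a-z0-9-]+\.)?(?<!link\.)medium\.com/

"https://medium.com".matches?(pattern)                   # => true
"https://blog.medium.com/post-123abc".matches?(pattern)  # => true
"https://link.medium.com/AXEtCilplkb".matches?(pattern)  # => false
```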