The fix was to create a second updates feed, available at https://dobrado.net/testing. Now when an update is built, a post is created in this feed. I still write the post to the original updates feed at https://dobrado.net/updates, but those posts are now saved as drafts with a scheduled publish time, currently set to 24 hours later. Of course my publishing tools didn't have a concept of draft posts, so I had to add that too... Next I need to add UI support to make this feature available to users; at the moment it's only used by the Autoupdate module.
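The dual-feed publishing step could be sketched like this. This is a hypothetical Python sketch only: Dobrado's real code is different, and `publish_update`, `Post`, and `Feed` are illustrative stand-ins, not its actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Post:
    version: str
    content: str
    draft: bool = False
    publish_at: Optional[datetime] = None

@dataclass
class Feed:
    url: str
    posts: list = field(default_factory=list)

DELAY = timedelta(hours=24)  # the 24-hour buffer mentioned above

def publish_update(version, content, testing, updates, now=None):
    now = now or datetime.now()
    # The post goes live on the testing feed immediately...
    testing.posts.append(Post(version, content))
    # ...and is saved as a scheduled draft on the main updates feed.
    updates.posts.append(Post(version, content, draft=True,
                              publish_at=now + DELAY))
```

The point of the delay is that subscribers to the main feed only ever see updates that have already survived a day on the testing feed.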
Lastly, I added support for removing updates. This is needed so that a broken version number can be re-used rather than left in place. Sites subscribed to the testing feed also need to re-apply the update at the current version number, which is triggered by the build server removing the matching post from the feed. When the Autoupdate module sees that a post has been removed, it removes the matching version of the update, which means it can install that version again when it is eventually re-published. The draft post also needs to be removed on the build server, but after that anyone subscribed to the updates feed will never know there was a problem.
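A minimal sketch of how the removal side might be tracked, so that a pulled version number becomes installable again once it reappears in the feed. The class and method names here are hypothetical, not the Autoupdate module's actual implementation.

```python
# Hypothetical sketch: track which versions are in the feed and which have
# been applied, so a removed (broken) version can be re-applied later.
class Autoupdate:
    def __init__(self):
        self.applied = set()       # versions already installed on this site
        self.seen_in_feed = set()  # versions currently published in the feed

    def sync(self, feed_versions):
        removed = self.seen_in_feed - set(feed_versions)
        # A version pulled from the feed was broken: forget we applied it,
        # so the same version number can be installed when re-published.
        self.applied -= removed
        self.seen_in_feed = set(feed_versions)

    def apply_new(self, feed_versions):
        for v in feed_versions:
            if v not in self.applied:
                self.applied.add(v)
                # ...fetch and install the update package here...
```

Under this scheme, re-publishing a fixed build with the same version number is indistinguishable from publishing it for the first time.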
So now I have a couple of servers subscribed to the testing feed, and the others that I really don't want to break when I build an update are subscribed to the normal updates feed.
For the last few years I've worked on adding all sorts of IndieWeb building blocks to Dobrado, and there are enough pieces to play with that, from a scratch-your-own-itch perspective, you can stay itchy for a very long time!
But I've decided to step back and look beyond my own itches. Or maybe they're still mine, but community goals could also be something to choose to work on. Something we hear a lot in the community is that the IndieWeb is just too hard to get started with. I think that's true, but since everyone is a volunteer, there can't be any expectation that anyone will fix that problem. All you can do is recognize that it's an issue and, if you have some time and the inclination, work on it yourself.
For me, that means making Dobrado easier to use. Until now, installing the software has meant knowing how to use git and editing config files, and keeping it up to date meant more git commands and a basic understanding of how the software works... not overly friendly! My solution was to create a new build system, which creates the updates and also produces a feed you can follow at: https://dobrado.net/updates
The other half of this project was to create a new module that handles automatic updates by subscribing to this feed. Since Dobrado supports WebSub, the update happens straight away. The feed items contain enough information for sites to fetch the update from dobrado.net, and the module creates notifications when it has updated your site too.
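The subscriber side might look roughly like this, assuming each feed entry carries a version number and a download URL. Both field names and the function name are assumptions for illustration, not the module's real interface.

```python
# Hypothetical sketch of reacting to a WebSub push: the hub delivers the
# updated feed, and each entry is assumed to carry a "version" and a "url"
# pointing at the update package.
def handle_websub_push(entries, installed_version, fetch=None):
    """Return the newest entry if it needs installing, else None."""
    newest = max(entries,
                 key=lambda e: tuple(map(int, e["version"].split("."))))
    if newest["version"] == installed_version:
        return None  # already up to date
    if fetch is not None:
        fetch(newest["url"])  # download the update package
    return newest
```

Because WebSub pushes the feed to subscribers, there's no polling loop: each site only does work at the moment a new update post appears.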
I have quite a few sites running Dobrado, so this change means I won't need to log in to all of them just to pull in the latest changes. It's probably the biggest change to the way I've developed the software in the last few years and I'm still getting used to it!
I'm hoping that solves the update problem for anyone else who wants to use the software too. The next goal is still making it easier to even get started... I've done some work on that as well and will hopefully have more updates soon.
Gregor and I decided to have a go at implementing AutoAuth, after the session on private webmentions and different types of auth on the Saturday. That discussion brought up that AutoAuth was capable of replacing some of the earlier auth flows created to solve individual cases of sharing private data. I think that's a good sign for AutoAuth, because it's flexible enough to solve multiple problems.
That meant we had to pick a test case we would use to implement and demo using AutoAuth, and decided viewing a private post would be the simplest. Gregor already had support for private posts on his site, so we started from there and I would add support to view the post.
Our first challenge was just agreeing on how to read the spec! We had both read it before starting the hack day, but it's not a simple thing to get your head around. One of the best things we did was work through each step, once we had picked our roles. We implemented one step at a time, each working on our own side of the flow, and luckily there was about the same amount of work on each side, so this worked well.
The first step was for Gregor to add a token endpoint, discoverable from his private post, along with a WWW-Authenticate header. The process then is: when I fetch the private post, I see this header and craft a POST request to his token endpoint. This request contains a bunch of information, the goal being to give Gregor's token endpoint enough to find my authorization endpoint and make a request to it on my behalf. I make sure that request will succeed by storing the same authorization code that I send to the token endpoint. What I really liked at this point was that I didn't need to change my authorization endpoint at all: I could craft an entry in my authorization codes table that would pass when requested, based on the AutoAuth spec.
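The request-building step described above could be sketched like this. The field names only loosely follow the AutoAuth draft, and the function name and in-memory stores are hypothetical stand-ins for real endpoint code and database tables.

```python
import secrets

auth_codes = {}  # codes my own authorization endpoint will accept
pending = {}     # state -> the private post a token is being requested for

def build_token_request(post_url, me, callback_url):
    code = secrets.token_urlsafe(32)
    state = secrets.token_urlsafe(16)
    # Pre-store the code so my authorization endpoint approves it when the
    # remote token endpoint verifies it on my behalf.
    auth_codes[code] = me
    # Remember which post this state belongs to, for the later callback.
    pending[state] = post_url
    return {
        "grant_type": "authorization_code",
        "code": code,
        "me": me,                  # my identity, used for endpoint discovery
        "callback": callback_url,  # where the issued token gets delivered
        "state": state,
    }
```

The trick the paragraph above describes is visible here: the requesting site generates the code itself and stores it, so its unmodified authorization endpoint will validate it when asked.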
After Gregor makes this request, he's happy that I've been identified and can be issued a token for his private post. I provide a callback URL in my request, so that's where he sends the token. I store it on my server and can now fetch the private post again with the token in an Authorization header. This all worked pretty well, and our two-minute demo involving just a couple of page loads was our reward for spending pretty much the whole day working this out. :-)
We observed a few interesting things from this process. First, there's a fair bit of work involved in getting a token, but once it's done you get to skip most of it on subsequent requests for the private post. I also found the callback process for receiving the token interesting: there's not much information in the request about who the token is coming from. There is enough, though, because the callback includes a state parameter which I initially generate. I need to store all the information about the private post I'm accessing when creating the state parameter, so that I know who to associate the token with when it gets returned.
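That state-keyed bookkeeping might look like this. This is a hypothetical sketch, not the actual endpoint code from the hack day; `tokens` and `pending` stand in for real storage.

```python
tokens = {}  # post_url -> bearer token received via the callback

def handle_token_callback(params, pending):
    # The state parameter is the only link back to the post being fetched;
    # an unknown state raises KeyError, which a real endpoint would reject.
    post_url = pending.pop(params["state"])
    tokens[post_url] = params["token"]
    return post_url

def auth_header(post_url):
    # Subsequent fetches of the private post reuse the stored token.
    return {"Authorization": "Bearer " + tokens[post_url]}
```

Since the callback itself says little about its origin, the state lookup is doing all the work of tying the incoming token to the right private post.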
IndieWeb Summit 2019 was great and I don't think we would've been able to get through AutoAuth in a day without having such an awesome group of people to talk to!