GSoC: The Webserver

June 19, 2016

This week I upgraded the StorageWizardDialog. Instead of one big text field for the code, we decided to use eight small text fields. The scummvm.org page shows the code you should enter, and the correctness of each code group is checked separately.

ScummVM also checks that the whole code is correct. When you enter a valid code, the «Connect» button becomes active and you can get your cloud storage working.

However, we don't think a 40-character code is an easy thing to enter. We're working on different ways to simplify that task, and one of them (probably the ultimate one) is not entering the code at all! To achieve that, we're adding a local webserver to ScummVM. For now, it looks like this:

When you open the StorageWizardDialog in a ScummVM build with SDL_Net (i.e. with local webserver support), there are no fields at all, just a short URL for you to visit.

When you do, you're redirected to the service's page, where you can allow ScummVM to use that service's API. When you press the «Allow» button, you're automatically redirected to ScummVM's local webserver page, and ScummVM automatically gets the code!
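The webserver's job at this point boils down to reading the code out of the redirect request. Here is a minimal sketch of just that step, with an invented `extractAuthCode()` helper; the real SDL_Net-based server is of course more involved:

```cpp
#include <string>

// Hypothetical sketch: pull the value of the "code" query parameter
// out of the first line of an HTTP request, e.g.
//   "GET /?code=abc123 HTTP/1.1"
// The helper name and approach are illustrative, not ScummVM's actual code.
std::string extractAuthCode(const std::string &requestLine) {
	const std::string key = "code=";
	size_t pos = requestLine.find(key);
	if (pos == std::string::npos)
		return "";
	size_t start = pos + key.size();
	size_t end = requestLine.find_first_of(" &", start);
	if (end == std::string::npos)
		end = requestLine.size();
	return requestLine.substr(start, end - start);
}
```

Once the code is extracted, the dialog can proceed exactly as if the user had typed it in.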

I've also made a few updates to the cloud sync procedure: it now starts automatically when you connect to a storage, when you save a game, or when an autosave happens. Last time I said that if you decided to cancel a sync, the file being downloaded would be left damaged. Well, not anymore: such files are now automatically removed.

Finally, we came up with the plan! :D

First, some of my GUI dialogs require more space than ScummVM can provide. So we decided that I should implement a «container box» widget: an area with scrollbars that provides more virtual space than we really have.

After I do that, I'll start working on cloud file management: users will be able to upload and download their game data right through the ScummVM interface.

Then we're going to add a «Wi-Fi sharing» feature. It basically means the local webserver would also be used to access files on the device through a browser. No more USB cables! You could copy files from your phone by clicking a link in your browser, or upload files onto your phone the same way.

Then comes support for the next cloud storage, Box. I know nothing about it yet, but hopefully it's similar to the three I've already added.

And in the end, I'll work on the things that make it easier to enter the code if your ScummVM build doesn't have the local webserver feature: opening a browser and clipboard support. ScummVM would automatically open a browser so you could allow it to use the storage's API, and you'd be able to copy the code and paste it into ScummVM's text fields. These are interesting features, but as far as I know, they are highly non-portable. For example, there is ShellExecute on Windows and a special Intent on Android, but no such thing on Unix. And not all platforms even have a clipboard. On the other hand, there is SDL_GetClipboardText()... in SDL2.

Anyway, I've got one (and probably the most difficult) exam to pass next week. Yeah, an exam and the midterm in the same week. I hope I'll at least be able to do that «container box» :D

GSoC: The GUI Week

June 12, 2016

So, this week I mostly worked on the GUI. The Cloud Options tab is almost fully functional; we still want to modify the Connection Wizard a little so it helps users enter the code (which is quite long, about 40 characters). Apart from that, everything in the tab works as it should: you can easily switch between your connected storages, see how much space your saves occupy on your cloud drive, and see when the last successful saves sync was. The Connection Wizard works too, yet, as I mentioned, it won't notify you of typos. We're working on the corresponding scummvm.org URLs, and those work quite well.

I've also added Google Drive support this week. As Google Drive uses file ids instead of paths, that wasn't a very pleasant adventure. I mean, imagine you want to download a file knowing its path. You can't just ask for that file, because there are no paths in Google Drive. So you have to split the path by separators, start at the Google Drive root folder, and traverse the path: list a directory, find the corresponding subdirectory, and repeat. And, because there are no paths, there can be multiple files with the same name! There could be two ScummVM folders, or three Saves folders, or five soltys.000 files. Well, I just use the first one's id, so if you try to mess with Google Drive's ScummVM folder, you'll succeed. Also, as we wanted to give users more freedom, we had to ask for full Google Drive access, because application data folders are completely hidden from users. ScummVM only creates its own folder in the Google Drive root directory, so fear not!
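The traversal can be sketched like this; the `Listing` type, the `resolveId()` name, and the ids are all invented for illustration, and the real code talks to the Drive API instead of a map:

```cpp
#include <map>
#include <string>
#include <utility>

// Each directory listing maps (parent id, child name) -> child id.
// In reality every lookup is an API call that lists one directory.
typedef std::map<std::pair<std::string, std::string>, std::string> Listing;

// Walk the path from the root, taking the first match at every step
// (just like the post describes: duplicates exist, we use the first one).
std::string resolveId(const Listing &listing, std::string path) {
	std::string id = "root"; // Google Drive's root folder alias
	while (!path.empty()) {
		size_t slash = path.find('/');
		std::string name = path.substr(0, slash);
		path = (slash == std::string::npos) ? "" : path.substr(slash + 1);
		Listing::const_iterator it = listing.find(std::make_pair(id, name));
		if (it == listing.end())
			return ""; // no such file
		id = it->second;
	}
	return id;
}
```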

Finally, I worked on the progress dialog for the save/load manager. During a saves sync, some slots could be unavailable because they are still being downloaded. In that case you'd see the following dialog:

Most engines use a common pattern for their save files, so I call these engines «simple». If an engine is «simple», you can press the «Run in background» button and you'll see all the slots. Some of them will be «locked» and lack a thumbnail, but all the others are available to save or load as usual. Unfortunately, there are also «complex» engines. I was unable to implement the «locked» slots feature for these because of their complex nature, so «Run in background» is unavailable for them. You have to wait until all saves are downloaded, and only then does the save/load feature become available. You can always hit the «Cancel» button, but that could leave the file being downloaded damaged, so use it at your own risk =)

I'll have to pass my exams next week, so I've already started working less. The plan for next week is to upgrade that Connection Wizard dialog and start working on the local webserver feature. (Yeah, I don't want you to enter that code when ScummVM could do it for you instead.)

GSoC: Week 2 — The Cloud Icon

June 5, 2016

I'm not sure what the original plan for this week was, but I managed to finish the Dropbox and OneDrive storage implementations and add the saves sync feature. That was a midterm milestone in my proposal plan (mostly because I tried to schedule all the difficult work after the midterm, as I'm going to have my exams very soon, right before it).

So, now ScummVM knows how to upload and download saves to and from Dropbox and OneDrive and automatically does so on launch. After that, I decided to work on some GUI-related tasks: for starters, an indicator icon that is shown when the active storage is syncing something. Well, that took the whole week :D But now there's an extra nice icon in the corner that automatically appears and disappears when needed. And it pulsates when active!

Next I'll work on the GUI further: add a sync progress dialog to the save/load manager and forbid using save slots that are being synced. In case I don't know what to do with the GUI, I can always work on the Google Drive storage implementation.

UPD: by the way, I've updated the icon, so now it's entirely my own creation.

GSoC: File downloading week

May 28, 2016

According to my proposal's schedule, I should have been working on the Storage interface, token saving, and the Dropbox and OneDrive stubs this week. I tried to schedule less work before the midterm because I've got some exams to pass. Still, I had managed most of that first week's plan with my preparation work, so I finished it by Tuesday and started on the things planned for the second week: file downloading.

I also implemented the DownloadRequest class that Tuesday. It simply reads bytes from a NetworkReadStream and writes them into a DumpFile.
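In standard-library terms (with plain streams standing in for NetworkReadStream and DumpFile, and the function name invented), the copy loop looks roughly like this:

```cpp
#include <iostream>
#include <sstream>

// Sketch of DownloadRequest's core: read a chunk at a time and write
// it out until the source stream is exhausted. The real class does
// this incrementally across ConnectionManager callbacks.
void copyStream(std::istream &in, std::ostream &out) {
	char buffer[1024];
	while (in) {
		in.read(buffer, sizeof(buffer));
		std::streamsize n = in.gcount();
		if (n == 0)
			break; // nothing left to read
		out.write(buffer, n);
	}
}
```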

There was a small problem with DumpFile: it fopen's the file (creating it if it doesn't exist), but assumes the file's directory already exists. When you're downloading a file from the cloud into a local folder, you might not have the same directory hierarchy there, so fopen fails and no download occurs. As ScummVM had no createDirectory() method, I had to implement one. Now DumpFile accepts a bool parameter indicating whether all the parent directories should be created before fopen'ing the file.
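The gist of the fix can be sketched as a pure function; `parentDirectories()` is an invented name for illustration, not ScummVM's actual API:

```cpp
#include <string>
#include <vector>

// Given the path of a file about to be fopen'ed, list every parent
// directory that must exist first, outermost to innermost. The caller
// would then create each one in order before opening the file.
std::vector<std::string> parentDirectories(const std::string &filePath) {
	std::vector<std::string> dirs;
	for (size_t i = 0; i < filePath.size(); ++i)
		if (filePath[i] == '/')
			dirs.push_back(filePath.substr(0, i)); // everything up to this separator
	return dirs;
}
```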

Then we decided I should «upgrade» our Requests/ConnectionManager system by adding a RequestInfo struct and giving each Request an id. User code could use that id to locate a Request through ConnMan and ask it to change its state. That's mostly needed when we want to make the same request again: for example, if some error occurred, we'd probably like to retry in a few seconds. This change added a RETRY state for Requests, so user code can easily ask for a Request to be retried. The original idea with RequestInfo and ids was later rethought, so now we actually use pointers to Request instances. That way, one can easily affect a Request: cancel, pause, or restart it through its methods.

With that system in place, I could easily implement the OneDriveTokenRefresher. That might not be the best class name, but it describes the class quite well =) OneDrive access tokens expire after an hour, and it's very sad to get an error just because you've been working with the app for more than an hour. This class wraps the usual CurlJsonRequest (which is used to receive a JSON representation of the server's response) to hide errors caused by an expired token. Basically, it peeks into the received JSON and, if there is an error message, refreshes the token and then retries the original request with the new token. With the new system, it just pauses the original request and then retries it; there's no need to save the original request's parameters anywhere, because they're stored along with the paused request.
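Here's a toy model of that wrap-and-retry idea. `InnerRequest`, `TokenRefresher`, and the string "responses" are invented stand-ins for the real CurlJsonRequest and JSON handling:

```cpp
#include <string>

// Stand-in for CurlJsonRequest: fails with an error message unless
// given a fresh token, and counts how many times it was performed.
struct InnerRequest {
	int attempts;
	InnerRequest() : attempts(0) {}

	std::string perform(const std::string &token) {
		++attempts;
		if (token != "fresh-token")
			return "{\"error\": \"token expired\"}";
		return "{\"ok\": true}";
	}
};

// Stand-in for OneDriveTokenRefresher: peeks into the response and,
// on an expired-token error, refreshes the token and retries the
// wrapped request once with the new token.
struct TokenRefresher {
	std::string token;

	std::string perform(InnerRequest &inner) {
		std::string response = inner.perform(token);
		if (response.find("\"error\"") != std::string::npos) {
			token = "fresh-token"; // stands in for the real OAuth refresh call
			response = inner.perform(token);
		}
		return response;
	}
};
```

The caller never sees the expired-token error; it only gets the final, successful response.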

After finishing that useful token-refreshing class, I implemented OneDriveStorage::download(), thus completing the second week's plan too. The plan also included the «auto detection» feature, which is being delayed a little because it would be called from the GUI, and I'm not working on the GUI yet. Folder downloading wasn't in my plan, but it's obviously a must-have feature, so that's what I did next. FolderDownloadRequest uses Storage's listDirectory() and download() methods, so it's easy to implement, and it works with every Storage implementation that has those methods working.

I actually implemented OneDriveStorage's listDirectory() a little later, because there is no way to list directories recursively through OneDrive's API. Dropbox does offer that, so there it takes one API call to list a whole directory and the contents of all its subdirectories. To list OneDrive directories recursively, I had to write a special OneDriveListDirectoryRequest class, which lists the directory with one API call, then lists each of its subdirectories with further calls, then their subdirectories, and so on until everything is listed.
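The same breadth-first idea can be sketched with a queue; the `Tree` type, the trailing-slash convention for subdirectories, and the function name are all invented for the example:

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Stand-in for the one-level API: a map from a directory's path to
// its immediate children. A trailing '/' marks a subdirectory.
typedef std::map<std::string, std::vector<std::string> > Tree;

// List everything under root, one directory per "API call": keep a
// queue of directories still to list and drain it, enqueueing every
// subdirectory we discover along the way.
std::vector<std::string> listRecursively(const Tree &tree, const std::string &root) {
	std::vector<std::string> result;
	std::queue<std::string> pending;
	pending.push(root);
	while (!pending.empty()) {
		std::string dir = pending.front();
		pending.pop();
		Tree::const_iterator it = tree.find(dir);
		if (it == tree.end())
			continue;
		for (size_t i = 0; i < it->second.size(); ++i) {
			const std::string &entry = it->second[i];
			result.push_back(entry);
			if (!entry.empty() && entry[entry.size() - 1] == '/')
				pending.push(entry); // a subdirectory: list it too
		}
	}
	return result;
}
```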

I've been asked to draw some sequence diagrams explaining this whole Requests/ConnectionManager system, so that's what I'm going to do now. I'm not a fan of sequence diagrams, and I'm going to draw them in my own style, which, I believe, makes them simpler.

UPD: here they come, two small diagrams (available on the ScummVM wiki too):

File sync comes next week, and I guess I'm going to discuss that feature a lot before I actually start implementing it.

GSoC: Work starts tomorrow

May 22, 2016

(At least, that’s «tomorrow» in my timezone.)

So, the last time I wrote about doing some preparation work before GSoC starts, and I believe I did quite well.

The JSON parser works fine, even though we have to replace all Unicode characters with '?' to make it work.
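A sketch of that workaround, with an invented function name (the actual ScummVM code may do this differently): every byte outside the 7-bit ASCII range becomes '?' before the text reaches the parser.

```cpp
#include <string>

// Replace every non-ASCII byte with '?'. Note that a multi-byte UTF-8
// character turns into several question marks, one per byte.
std::string replaceNonAscii(const std::string &text) {
	std::string result = text;
	for (size_t i = 0; i < result.size(); ++i)
		if (static_cast<unsigned char>(result[i]) > 127)
			result[i] = '?';
	return result;
}
```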

Now I'm checking that my code compiles not only with MSVC, but also with gcc through the MinGW shell. Thus I've tested the configure changes sev made, and libcurl detection and usage works.

There is already a small «framework» for libcurl: I've added ConnMan (ConnectionManager, similar to ConfMan), which can be used to start curl requests. There is a special CurlJsonRequest, which reads the whole response, parses it as JSON, and then passes the result to its caller. Finally, I committed Callbacks yesterday. I still keep those in a separate branch, because some edits might be necessary, but I believe it's a very good thing to have.

These Callback classes are not just plain pointer-to-function stuff. They're «object-oriented callbacks», meaning each is actually a «pointer to an object and its method». Plus, I made it so one can specify an argument type. Yes, those are templates. I actually remembered that I did such a thing a few years back while I was at school. «Functor» might be the wrong name for it, but the idea is the same: there is a base class, which doesn't know anything about the class whose method we're going to point to, and there is a derived class, which implements it. We use pointers to the base class, so we don't care whether it's a Callback<A>, a Callback<B>, or SomeOtherCallbackImplementation. Thus, we don't have to write all those ugly global function callbacks, cast the only void * parameter to some struct, and use that struct's fields to do the work. We just write Callback<ClassName>, pass ClassName::method, and that method is automatically called when the callback's operator() is invoked. If we want to specify that ClassName::method accepts not void * but, for example, AnotherClass, we just write Callback<ClassName, AnotherClass> (note that it accepts an object, not a pointer). It's as simple as that!
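A minimal version of the idea, simplified relative to ScummVM's actual Callback classes, looks like this:

```cpp
#include <string>

// The base class knows only the argument type, nothing about the
// receiver, so callers can hold a BaseCallback<T> * and not care
// which class the bound method belongs to.
template<typename T>
class BaseCallback {
public:
	virtual ~BaseCallback() {}
	virtual void operator()(T arg) = 0;
};

// The derived template binds a concrete object and one of its methods.
template<class C, typename T>
class Callback : public BaseCallback<T> {
	C *_object;
	void (C::*_method)(T);
public:
	Callback(C *object, void (C::*method)(T)) : _object(object), _method(method) {}
	virtual void operator()(T arg) { (_object->*_method)(arg); }
};

// Example receiver: a typed method, no void * casting anywhere.
class Logger {
public:
	std::string log;
	void append(std::string line) { log += line; }
};
```

Calling `(*cb)("hello")` on a `BaseCallback<std::string> *` built as `new Callback<Logger, std::string>(&logger, &Logger::append)` dispatches straight to `logger.append("hello")`.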

I also wanted to do something about Storages: save the corresponding access tokens and such into the configuration file. Right now it remembers the access token of the «current» storage, so I'm not authenticating every time I launch ScummVM. Still, I'd like to remember the tokens of all connected storages, so the user can easily switch between them.

Finally, there was «writing an API skeleton» in my plan. Well, the whole cloud system works as I thought it would. There are no real API method implementations yet, but apart from that it looks fine.

OK, so I have to start working next week. The general plan is to design the API and implement support for Dropbox and some other provider. My proposal schedule states the following:

May 23 — May 29
Write Storage interface, implement tokens saving.
Add Dropbox and OneDrive storages.

May 30 — June 5
Implement file downloading (Dropbox and OneDrive).
Add auto detection procedure running after the download.

The Storage interface is more or less there, and the token is saved. «Add» there doesn't imply the storage would be completely functional, so adding the Dropbox storage is also done.

So, the actual plan is to upgrade Storage with some config-saving methods and make the token-saving feature support multiple Storages. After that, I guess I'll add a OneDrive stub and start implementing some API methods in the Dropbox and OneDrive backends.

I still have university studies here, and exams are getting close. This means I'll sometimes have to study instead of work, and I might end up behind schedule rather than ahead of it. And that means I'll have no weekends closer to the midterm and after it =)
