Credit card regional licensing enforcement in iTunes
Time and again on this blog, I have shared my thoughts on the complicated issue of media licensing in the digital era. Over the past year, I have noticed a disconcerting new development that really needs flagging, yet I have not seen it discussed anywhere: online retailers’ reliance on your credit card’s country of issue to enforce regional licensing restrictions.
The issue first came to my attention in the early fall of 2015 when I tried to purchase music via us.7digital.com and was told I could not complete the purchase using a credit card issued outside of the country of purchase (i.e. the US). Granted, I was using the proxy loophole to purchase my content, but the music I was after was not (yet) available on de.7digital.com and I had no evidence that it ever would be, so I was merely looking for a way to get hold of the music while supporting the artist. Shortly thereafter, a colleague, likewise situated in Germany, mentioned he experienced the same problem ordering a DVD from amazon.co.uk, a place I sent a lot of my savings as an undergrad.
Now this is annoying enough, but it is really just another extension of regional licensing. However, living in Germany, the change has meant that I can no longer use websites like 7digital at all unless I overhaul my banking services. Since I got my credit card long before I moved to Germany, it was issued in the Netherlands, so de.7digital.com won’t sell me its products. Yet because I am in Germany, nl.7digital.com will not sell me its products either, unless I use the proxy loophole.
Until recently, PayPal could still be used across borders; now it appears it too will not allow me to purchase music from German websites, presumably because I have linked PayPal to my Dutch checking account, though the error message I get does not specify. The last time I tried to change my country of residence in PayPal, incidentally, it would not let me, so I had to create a new account with a different email address. Though that was years ago and they may have improved their services since, I suspect they haven't, since they keep sending out these country-specific updates to their terms of service. The only reason I can still use amazon.de is that Amazon Germany has a direct debit option and I have a German checking account.
Unlike in North America, where the main point of credit cards is the line of credit, people on the Continent generally register credit cards for two reasons: to make online purchases and to use when abroad. Now that my ability to do these things is being incrementally compromised, I really have no more reason to go on paying for banking services outside my country of residence. Still, having to overhaul my banking seems unnecessarily cumbersome when I supposedly live in the same open market and monetary region where my credit card is registered.
iTunes licensing scheme
There is a new album out, available on de.7digital.com but not amazon.de. It’s available on iTunes, though I don’t normally use iTunes because it doesn’t run natively on my operating system and I generally don’t see the point of installing software just to be able to make purchases. (Also, iTunes music is sold as a licence to a single device, with the option of “syncing” to your other devices — is this a reference to their cloud service, so I can’t actually use my music?) I’ve dug out my old Windows machine to try and make the purchase on iTunes, and I’m pretty sure I’ve completed the process, but no download link is showing up. I believe iTunes is set up to charge me via my phone bill, because they won't take my Dutch credit card either, so it’ll be a month before I find out whether I have in fact spent the money. It’s looking increasingly as though in order to get hold of this music, I’m going to have to prove that the eurozone is not a single market by changing the country in which I do my banking.
Under these circumstances, one almost feels forced back into the 1980s radio–cassette deck model of recording music from Spotify, but then that doesn't really help the artist. After so many years of technological development, it sure is surprising to be thrown back to the days of taping radio broadcasts.
I have reported before on my struggles to bring XeLaTeX into the humanities publication model. Although I’ve found one journal fully prepared to take my .tex copy (and with it my tikz stemmata!), .docx remains the standard. Below are some notes on my ongoing adventures.
Unsurprisingly, my publisher did make me submit my book copy in .docx. The difficulty with these things is that there is no straightforward means of conversion, which is why I had attempted to get publishers’ policies on this before setting out to write my submission draft. However, publishers typically don’t have reliable information to offer until they’ve decided they want to publish your work, by which time it is too late.
After a few tests, I settled on pandoc as the best .tex-to-.odt conversion tool. Most of the text body and nearly all of the document structure in fact came out fine; I seem to remember one or two minor issues which I could have avoided had I got rid of a few more custom commands ahead of conversion. The big issue, however, was biblatex. Although pandoc can load external citation stylesheets (CSL files), it cannot read LaTeX citation styles, so I would have had to spend weeks rebuilding the desired citation style almost from scratch. I couldn’t actually get existing CSL stylesheets to work either, but I wasn’t going to invest much time given that there didn’t seem to be a Chicago Manual of Style-compliant stylesheet readily available.
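For reference, the sort of invocation involved looks roughly like this; the file names are placeholders, and the exact flags depend on your pandoc version:

```shell
# Convert a LaTeX chapter to ODT (file names are placeholders).
# Citations are resolved from the .bib database, but rendered with
# pandoc's own (CSL) citation styles, not the biblatex style.
pandoc chapter.tex --bibliography=library.bib \
  --standalone --output=chapter.odt
```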
Accordingly, I settled for the inconvenience of manually pasting my footnotes and bibliography into the .odt from the pdf output file and manually removing pdf line breaks, restoring italics, and restoring smallcaps. An arduous, frustrating, and error-prone task, but it sure beats doing the same for the text body as well.
I then ran into a surprising shortcoming in the major word processor file formats: verse line numbering. Even in LibreOffice, I found that there is only one central configuration for line numbering, which is then turned on or off for certain paragraph styles or individual paragraphs. That may sound sufficient, but since my book quotes scores of longer and shorter verse passages in various verse forms across five languages, my XeLaTeX draft had made full use of local configuration options to indicate not just indentation but also whether to display line numbers every three lines or every five. LibreOffice makes you pick one or the other. It also has a further problem with line numbering, as will become clear below. With stanzaic verse, I had configured stanza numbers to appear at the head of the stanza instead; LibreOffice does not support stanza numbering. I also felt rather limited by the horizontal positioning of the line numbers: the editor restricts them roughly to the document margin, so if you display verse with indentation it is not possible to get the line numbers comfortably close to the text.
When I got to Microsoft Word 2013, however, I found that the situation was much worse: its file format apparently does not allow you to restart numbering for new passages, so all my verse passages appeared with continuous line numbering. The horizontal positioning issue was also worse here. Needless to say by this point, Word does not support stanza numbering either. Accordingly, I had to give up, mark the line and stanza numbers for all my verse passages by hand, and request that the production department input them anew. I made a mental note to check my proofs for numbering errors.
As it happens, though, I’m pretty sure my book was typeset in Microsoft Word. Accordingly, the typesetter did not add my line and stanza numbers back in but instead added parenthetical notes indicating the range after each quotation. That’s pretty pathetic: verse display was perfect in my own draft, but the use of Word has crippled it to a self-publishing standard. Mind, my editor and copy editor-cum-typesetter have been absolutely fantastic, so I don’t mean for this to reflect on their professionalism in any way; instead, it reflects negatively on humanities publishing as an institution, which cannot afford to train editors to use more advanced software.
Though my experiences with Leeds Studies in English have been excellent, the standard in humanities journals is still far from ideal. Most recently, I have had to write a couple of verse text editions for journal publication. This brings two challenges for your typical word processor: the abovementioned line numbers and the critical apparatus. The first article actually involved a transcription without an apparatus, so I only had to deal with the line numbers. Interestingly, it never even occurred to me to simulate true line numbering by entering numbers and tabbing across for the line; I have to admit the notion rather offends my sensibilities. So instead, I wrote the article in LibreOffice with the idea that I would send them the pdf output as a guide to use with my .docx copy. I soon ran into a few more problems, however. Because my texts are newly discovered witnesses to known poems, and there are several instances of omitted or repeated lines, my text has a few irregular changes of numbering. Although LibreOffice allows the numbering to restart at any point, it still only allows one interval for the entire document; so if my line 78 is followed by a repeat of lines 75–76, the margin will show “75” and then “75” again five lines down, and there is no way to make the intervening numbers explicit, since the numbering frequency cannot be changed locally. I therefore printed the document, wrote in the additional numbers as required, and scanned the result to pdf for submission alongside the .docx copy for guidance.
The problems with LibreOffice do not end there, however. For some reason, although I can number lines in the program, it fails to reproduce them if I close the file and then reopen it. I have not investigated in depth, but it seems to have something to do with line-end characters: the numbering works only if I use shift-enter instead of a hard return, but if I then save, close, and reopen the document, my lines are back to hard returns.
When I got around to thinking about the critical edition, I realized my problems were now compounded by the need for a critical apparatus. How does one do this in Word? A bit of googling suggests people use footnotes. This means the apparatus will be mixed in with regular footnotes, and the production department of a given journal will do with that what they can. Not ideal.
I ended up writing to the prospective journal to ask for guidance. I did not think there was any chance they’d let me use XeLaTeX with ledmac, but I at least wanted to format my Word copy in accordance with their preferences from the outset. Tabbing in my line numbers and entering my apparatus entries as footnotes is indeed what they recommend; the production department will then convert the file on some other platform (Adobe InDesign?). It feels wrong, but it’s how things are done.
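For contrast, this is roughly what a numbered passage with an apparatus entry takes in ledmac (now maintained as reledmac); the reading and the witness siglum here are invented for illustration:

```latex
% A numbered verse passage with one apparatus entry
% (the variant reading and the siglum "B" are invented):
\beginnumbering
\pstart
Hwæt, we \edtext{Gardena}{\Afootnote{gar dena B}} in geardagum,\\
þeodcyninga þrym gefrunon
\pend
\endnumbering
```

The `\edtext` command marks the lemma in the text, and `\Afootnote` files the variant into the first apparatus layer, keyed automatically to the line number.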
The email autoresponder has well outlived its justified lifespan.
I value email and encourage its continued use. I am annoyed when people send letter-length messages via Facebook, as there is no sensible way of importing that type of mail into my email storage folder. In fact, Facebook’s web interface doesn’t have a content search function that I’ve been able to find, which puts it well beyond me how this company got to be so successful. Then again, the users who have made it successful are the same people who voted today’s politicians into power.
But there are certain technologies from the nineties that just have to go.
The email autoresponder is hopeless. The way most professionals use it, it’s either on, suggesting you won’t be answering email, or off, suggesting nothing at all. But virtually everyone with an active autoresponder ends up getting back to me within the expected amount of time, and many respond instantly. Then if you write back and forth a few times, you get clobbered with meaningless autoresponse messages. Painful.
It makes sense in a business context to let people know when you are going to be slow to respond, but we’ve had a name for that since the days of ICQ: a status. Email is not in need of replacing, but it should get a pre-send handshake element built into it. As soon as you enter a valid address into the recipient field, your client should read out the recipient’s status from their mail server and display the returned value in the client window. Clients could even be configured to prompt an “are you sure?” confirmation dialog when you hit “send.” Problem solved, in a far more elegant way.
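The flow I have in mind could be sketched like this; everything here is hypothetical (a real implementation would extend SMTP or IMAP, and the dictionary merely stands in for the recipient’s mail server):

```python
# Hypothetical sketch of the pre-send status handshake described above.
# A dictionary stands in for the recipient's mail server; addresses and
# statuses are invented.
STATUS_SERVER = {
    "alice@example.org": "away until 1 March",
    "bob@example.org": "available",
}

def lookup_status(address):
    """Return the recipient's advertised status, or None if unset."""
    return STATUS_SERVER.get(address)

def confirm_send(address):
    """Return True if sending should proceed without a prompt."""
    status = lookup_status(address)
    if status is None or status == "available":
        return True
    # In a real client, this is where the "are you sure?" dialog appears.
    print(f"Recipient reports: {status}. Send anyway?")
    return False
```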
My previous post discussed a few options for formatting teaching slides. Left out of that discussion was perhaps the most obvious choice, LaTeX’s beamer class. I’d seen it mentioned, but I hadn’t had the time to test it properly. I have now spent some time with it in preparation for the summer semester. Here are my findings.
The obvious advantage of beamer over HTML is that it automates some of the formatting while still using a logical structure at the input end. Effectively, it is as though someone has already written the CSS for you. This is great when producing straight text slides, which come out fine. I was a bit surprised that setting a default serif font and changing the beamer font theme to serif did not change table and list content, however; I had to resort to the ugly trick of loading my serif font as the sans-serif font instead. beamer comes with an okay default theme that can be modified endlessly.
What had me fed up with straight HTML-to-projection in the end was the fact that frame height cannot be dependably fixed unless you output to PDF. A beamer file has to be output to PDF, so this is no longer an issue — but this is hardly a real advantage over HTML.
There’s a \pause command, in case you go in for that sort of thing.
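For anyone unfamiliar with it, `\pause` reveals frame content incrementally; a minimal frame (contents invented) looks like this:

```latex
% Each \pause produces an additional overlay, so the bullet points
% appear one at a time across successive "slides" of the frame:
\begin{frame}{Example}
  \begin{itemize}
    \item First point \pause
    \item Second point \pause
    \item Third point
  \end{itemize}
\end{frame}
```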
Positioning graphics is a pain. Since word-wrapping is not a straightforward option when inserting an image, I end up creating a separate column for the image much of the time.
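The column workaround looks like this; the proportions and file name are placeholders:

```latex
% Text beside an image via the columns environment, since beamer does
% not wrap text around figures (widths and file name are placeholders):
\begin{frame}{Example}
  \begin{columns}[T]% [T] aligns both columns at the top
    \begin{column}{0.55\textwidth}
      Commentary on the image goes here.
    \end{column}
    \begin{column}{0.4\textwidth}
      \includegraphics[width=\textwidth]{manuscript.png}
    \end{column}
  \end{columns}
\end{frame}
```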
As with HTML, it takes trial and error to find out how much text fits inside a single frame. Unlike with HTML, you have to recompile to see the result of each tweak.
Embedding web content is of course impossible. I embed YouTube videos and Google Maps into my teaching slides sometimes, but I’ll have to keep drafting and projecting my slides in HTML to make that work. Hmmm.
I began teaching with HTML slides during a 2011 summer course. Previously, I had used PDFs generated from an OpenDocument source, but PDF can be a little cumbersome and inaccessible.
The draw of HTML is manifold. In terms of accessibility, one can specify a print version in the stylesheet, replacing colours with various styles of underlining, for instance, so students can print off your slides without losing any information or clarity. Navigationally, HTML allows one to tab back and forth between multiple sets of slides, yielding a two-dimensional navigational experience instead of the typical one-dimensional setup. This is great for grammar teaching especially, since it allows me to jump between different aspects of the language, and between theory and example sentences. In terms of work flow and philosophy, the end product of HTML equals the source, so the contents remain transparent while one can directly modify the product; also, the material is easily exported for use in other forms (especially web content) as well as converted to comply with future standards, so data obsolescence is arguably less of an issue. Finally, I use older hardware, on which PDF sometimes yields a sluggish experience.
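The print stylesheet trick mentioned above takes only a few lines; the class names here are invented for illustration:

```css
/* Sketch of a print override that trades colour for underlining,
   so colour-coded grammar survives a monochrome printer
   (class names are invented): */
@media print {
  .subject { color: black; text-decoration: underline; }
  .ending  { color: black; border-bottom: 1px dotted black; }
}
```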
This term has marked the first full semester that I’ve relied entirely on HTML slides across multiple courses. It’s been okay, but the experience has not been as good as it was in my 2011 setup.
In 2011, I displayed my slides in Opera. If Opera’s paginated slides mode (the tech world speaks of “paged” content) was already available then, I wasn’t aware of it, though I did know they were working on an implementation of the concept. I worked with Firefox when not teaching, however. One reason for using a different browser for my teaching was to have a custom set of bookmarks (and cache) at the ready. Most importantly, however, Opera has a switch to turn off scrollbars entirely, which in Firefox is only possible by tweaking userChrome.css, i.e. not entirely on the fly, and I do occasionally want scrollbars when browsing for my own purposes (I did try to go without for a good while). What came to bother me about Opera was simply that I was more used to the implementation and responsiveness of Firefox’s Find function and other shortcuts, so I didn’t feel as confident locating material while teaching, when every second counts.
This term, I have been using Firefox for my teaching, with a custom userChrome.css to almost hide the vertical scrollbar. I only learned about Opera’s slides mode in the course of the term, so I’ve never actually tried it. As I understand it, their implementation is convenient, but not great because the effective real estate is unknown, so you are expected simply to keep content to a minimum. That won’t cut it for me: I need to display a fair bit of grammatical information, so I need to know exactly how much will fit on a screen. As it happens, I have been using the same CSS recommended for Opera’s slides function, except I define slide height, like so:
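(A reconstruction of the stylesheet in question; the pixel value is a placeholder, since the right figure depends on the projector resolution and window chrome:)

```css
/* One block-level element per slide; the height is a placeholder
   that must match the visible viewport, minus page overlap. */
div.slide {
  height: 720px;             /* fill exactly one screenful per slide */
  overflow: hidden;          /* crop anything that does not fit */
  page-break-after: always;  /* one slide per page in paged output */
}
```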
This means I need to know the exact number of pixels left when you subtract the page overlap (the sliver of content that remains in view when you press PageDown) from the total visible vertical space, which of course depends on your window manager setup as well as your browser and display resolution. The setup is therefore not portable, and I need to be able to set LVDS to the same standard resolution in every classroom. Classroom projectors seem to be catching up to the 16:9 aspect ratio, but not all have, so this yields mixed results at present. I have also occasionally had xrandr error messages and been stuck in a higher resolution, yielding an ugly page and a half visible at a time. Finally, it has happened a few times that a new file running off the same stylesheet requires a different page-height from the previous week’s set of slides, and I am unable to figure out how that is possible. The result is that every new slide appears some ten pixels further down, until content begins to be cut off. This is unacceptable, though I imagine it must be due to human error on my part somehow.
In December, I gave a paper at a conference where I wouldn’t have access to my own hardware, so I couldn’t serve up my HTML slides directly. Since I’m used to drafting various kinds of documents in HTML, however (I reserve markdown for my less formal documents), I still drafted in HTML, embedding a good number of vector graphics using the <object> tag, then ran prince to convert to PDF. I must say I like the result, and it still seems more efficient than exporting from ODF, not to mention less of a pain to draft.
Now I’m thinking I might switch to PDF-from-HTML slides for my teaching, too. It seems unprofessional to host slides separated by an amount of whitespace that means nothing to the students, even if I bypass it in the print CSS. My main objection to PDF remains the fact that the colour coding I use so frequently won’t be visible if students print off the PDF on a monochrome printer. The ability to define a separate stylesheet for print remains one of the biggest benefits of HTML in this context.