It's used for both, but I don't think you can search directly using the source: special tag. If it's from a magazine or artbook and there are a lot of them, pooling them is the best idea.
http://moe.imouto.org/post/show/10358/fixme-tinkerbell-tinkle-tsukiyo-chakai's image comes up with a 301 Moved Permanently for me (pointing back at post/show/10358), even immediately after loading the post.
Hmm, I'm not seeing that happening to me; I'm using Firefox 3.0 beta 2.

petopeto said:
http://moe.imouto.org/post/show/10358/fixme-tinkerbell-tinkle-tsukiyo-chakai's image comes up with a 301 Moved Permanently for me (pointing back at post/show/10358), even immediately after loading the post.
I was testing with wget/curl in a different window. It looks like there's some new Referer check: that image URL works in wget when I pass a --referer manually, and 301s me when I don't. I'm sure I've wgetted single images like that here before successfully, so I'm guessing that was just added to replace the dynamic URLs. wget does send a referer automatically when it's downloading recursively, but not when given a single image URL to download.

admin2 said:
Hmm, I'm not seeing that happening to me; I'm using Firefox 3.0 beta 2.
Not displaying in FF (2) is probably because that's an unusually large image, which isn't anything new.
Suggestion: when checking referer headers, accept the lack of a referer as valid. It still prevents hotlinking to other domains, without affecting standalone downloaders like wget.
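For what it's worth, the check being suggested is tiny; here's a minimal sketch in Python, assuming a generic request handler (ALLOWED_HOSTS and referer_ok are hypothetical names, not the site's actual code):

    from urllib.parse import urlparse

    # Hypothetical allow-list; in practice this would live in the server config.
    ALLOWED_HOSTS = {"moe.imouto.org"}

    def referer_ok(referer_header):
        """Accept a missing Referer (wget, curl, direct loads) as valid,
        but reject referers that point at another domain (hotlinks)."""
        if not referer_header:
            return True
        host = urlparse(referer_header).hostname or ""
        return host in ALLOWED_HOSTS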
Ah, yeah, that's an undocumented recent change. It's there so expired URLs redirect to the original post, but it also affects some hotlinking. It's not too hard to figure out the right referer, so I'm just going to leave it how it is.
What are the dynamic URLs for? On the user side, it seems like they just break browser caching, which is sort of painful with this site's large images (I'm always cancelling page loads after making edits).
To prevent people from using a mass-leeching script/bot/program. Before the dynamic URLs we were pushing 8MB/s constantly on one server; now we're pushing 6MB/s across two servers. I'm thinking of removing it soon to see how well the servers can take the load.
Spent about an hour debugging the cache; hopefully it works now >_>; (URLs are now valid for 30 min)
Shuugo said:
Making URLs that can be reused for 5 minutes would be very difficult to implement (or so I think, imho) and would be rather buggy, since it would depend on response times and other stuff.

Here's one approach: tie the dynamic URL to the time. Take the request time, subtract 20 minutes, and round down to a multiple of 20 minutes to get the beginning of the period. Add 60 minutes to that to get the expiration. This means an image URL is valid for 20-40 minutes at the moment it's issued. The dynamic URL would be an encoding of the start time, so a request made at 9:35 and one made at 9:37 would get the same URL.

Example: a request made at 10:45 would get a URL valid from 10:20 to 11:20. A request made at 11:00 would get one valid from 10:40 to 11:40. One made at 11:10 would get the same URL as the one returned at 11:00.
Most importantly, most short-term page reloads (from edits, etc.) will get the same URL, so browser-side caching will work again.
(If you reload the page across a 20-minute boundary, the URL will change and it'll still reload, even though the old URL is still valid. No big deal.)
Of course, turning off dynamic URLs is even easier. Just tossing out an algorithm in case that doesn't work out.
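To make that concrete, here's a rough sketch of the bucketing in Python; SECRET, BUCKET, and the function names are made up for illustration, not the site's actual code:

    import hashlib, hmac, time

    SECRET = b"server-side-secret"   # hypothetical signing key
    BUCKET = 20 * 60                 # period starts are multiples of 20 minutes
    LIFETIME = 60 * 60               # each URL is accepted for 60 minutes after its start

    def bucket_start(now=None):
        # (request time - 20 min), rounded down to a multiple of 20 min
        now = int(time.time() if now is None else now)
        return (now - BUCKET) // BUCKET * BUCKET

    def image_url(path, now=None):
        # Every request inside the same 20-minute window gets the same URL,
        # so browser caching works across short-term reloads.
        start = bucket_start(now)
        sig = hmac.new(SECRET, f"{path}:{start}".encode(), hashlib.sha1).hexdigest()[:12]
        return f"{path}?t={start}&sig={sig}"

    def url_valid(path, start, sig, now=None):
        # Accept the URL until 60 minutes after its encoded start time, i.e. it
        # has 20-40 minutes of life left at the moment it's handed out.
        now = int(time.time() if now is None else now)
        expected = hmac.new(SECRET, f"{path}:{start}".encode(), hashlib.sha1).hexdigest()[:12]
        return hmac.compare_digest(sig, expected) and now < start + LIFETIME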
Can't use the source link to import stuff.
All I get is:
1 error prohibited this record from being saved
There were problems with the following fields:
- Source couldn't be opened: getaddrinfo: hostname nor servname provided, or not known
The DNS is dead; give it a few hours and it'll be working.
I wonder why it's not possible to edit one's post comments...
Just delete it and repost.
Also, DNS is up again; switched to the backup ones.
Anything of interest changed during the down-time?
Sorry for the long downtime, trying to figure out how to delete a HUGE folder. (inode size 93747712)
Was not successful..oh well
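For reference, one way to drain a directory with that many entries without building the whole listing in memory is to stream it with os.scandir; a rough Python sketch (drain_directory is a made-up helper, not what was actually tried, and deleting entries while iterating is technically unspecified behaviour, though it generally works on Linux):

    import os

    def drain_directory(path):
        """Recursively delete everything under `path`, streaming entries with
        os.scandir() instead of materialising one giant file list."""
        removed = 0
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    removed += drain_directory(entry.path)
                    os.rmdir(entry.path)
                else:
                    os.unlink(entry.path)
                removed += 1
        return removed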
URLs to images change every 30 min; with this change, images are now cached by the browser.
http://moe.imouto.org/post/index?tags=nise_midi_doronokai finds 4 posts; if I search for "nise_midi_doronokai loli", I get the other 2. I don't know if this happens on db, too, since I don't have a priv account there to test.
It looks like only 2 of the 4 posts have the loli tag, so if you use nise_midi_doronokai loli it'll only show the two pics with those tags.
but there are 6 images ;)
http://moe.imouto.org/post/index?tags=loli
Can't see anything on that page, btw.
edit: oh lol, it starts at page 3?!
Hm caching is broken~ yay~
And a half-assed fix is done: loli is now shown to everyone. If you don't like it, blacklist it in your profile.
more caching bugs squashed.
Caching is set back to guests
http://moe.imouto.org/post/index?tags=tamura_hiro lists 1 post, but the only post is a deleted one (post #12833) and it shows an empty page. Had to use the API to find which post was doing that ...
(It was actually doing it for "vocaloid", but it fixed itself when I posted another vocaloid picture, so I picked a different tag.)
Following danbooru's trunk, rq added in PicLens support. I haven't tried it, but it should work:
https://addons.mozilla.org/en-US/firefox/addon/5579
It's not working for me yet.

admin2 said:
Following danbooru's trunk, rq added in PicLens support. I haven't tried it, but it should work:
https://addons.mozilla.org/en-US/firefox/addon/5579
It looks neat (judging by the demo on their site using the plugin), but it's got to be bandwidth death here. It seems like you'd need mid-resolution semi-thumbs for it to be practical; e.g. resized down and JPEG'd to 300-400 KB or so.
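Generating those semi-thumbs would be simple enough; here's a rough sketch with Python/Pillow (make_sample, the 1500px cap, and quality=80 are guesses for illustration, not anything the site actually does):

    from PIL import Image  # Pillow

    def make_sample(src, dst, max_side=1500, quality=80):
        """Shrink an original scan to a mid-resolution JPEG of a few hundred KB
        so a thumbnail-wall viewer doesn't have to pull the multi-MB file."""
        img = Image.open(src)
        img.thumbnail((max_side, max_side), Image.LANCZOS)  # resizes in place, keeps aspect ratio
        img.convert("RGB").save(dst, "JPEG", quality=quality, optimize=True)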
Should be working now; the missing file has been added by rq.
Went back to static URLs; we'll see how much bandwidth it'll take.
Works for me now. It looks like it uses the thumbs, which at least makes it practical (but not pretty). Sort of neat for scanning through *lots* of thumbs quickly.
When I use it on danbooru and click on a picture, a little progress meter shows up and then it shows the full image. Here, it just shows the thumb. Maybe that's not in svn yet.
Still, would need a mid-resolution file for that to be much fun, since most pictures here take a good 20-30 seconds to load.
ed: it works on my local 'booru, and I tried with an image that didn't work on moe, so I don't think it's just a resolution problem. hmm..