
Thread: nzbget - binary newsgrabber

  1. #1

    nzbget - binary newsgrabber

    Hi guys,

    I proudly present the new version of nzbget - 0.3.0.

    NZBGet is a binary newsgrabber, which downloads files from usenet based on information given in nzb-files. NZBGet can be used in standalone and in server/client modes.

    The new version has many improvements, including a fast yEnc-decoder and par-check/repair.
    It runs very well on the WL500gP (in contrast to the previous version); in fact, this device was the primary development target for the new version.
    You can find more info about nzbget on its web site and on its sourceforge project page.

    The new version is now available in the package repository. You need to use the oleg branch. A swap file is also highly recommended.
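
    For example, a swap file on the USB disk can be created like this (the path and size are just an illustration):
    Code:
    dd if=/dev/zero of=/opt/swapfile bs=1024 count=65536   # 64 MB
    mkswap /opt/swapfile
    swapon /opt/swapfile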

    To install:
    Code:
    ipkg update
    ipkg install nzbget
    It will install a sample configuration file under /opt/share/doc/nzbget/nzbget.conf.example. Please rename it to .nzbget and put it into your home directory. Alternatively, the config file can be stored in one of the following locations:
    /etc/nzbget.conf
    /usr/etc/nzbget.conf
    /usr/local/etc/nzbget.conf
    /opt/etc/nzbget.conf
    Choose the one you like the most.
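
    For example, to use the home-directory variant:
    Code:
    cp /opt/share/doc/nzbget/nzbget.conf.example ~/.nzbget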

    You need to set at least the option "MAINDIR" and one news server in the configuration file. Please note that MAINDIR must exist; it will not be created automatically.
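
    A minimal configuration might then look like this (apart from MAINDIR, the option names below are only a sketch; please verify them against nzbget.conf.example):
    Code:
    # MAINDIR is required and must already exist
    MAINDIR=/opt/var/nzbget
    # one news server; check the sample file for the exact option names
    server1.host=news.example.com
    server1.port=119
    server1.username=user
    server1.password=pass
    server1.connections=4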

    For a usage description please refer to the web site and to the help screen ("nzbget -h").

    Basically you start a daemon first:
    Code:
    nzbget -D
    To add a new download:
    Code:
    nzbget -A /path/to/your/file.nzb
    To list current download queue:
    Code:
    nzbget -L
    To see the server output (remote mode):
    Code:
    nzbget -C
    or, to use the curses output directly if it was not selected in the configuration file:
    Code:
    nzbget -o outputmode=curses -C
    Please let me know if you have any problems installing or running nzbget.

    Best Regards,
    hugbug.
    Last edited by hugbug; 30-11-2007 at 10:49. Reason: typo corrected

  2. #2
    Hmmm, interesting, and thanks - I didn't spot that this was out.

    I might try this on my Linkstation (I don't have the WL500g but this is a good forum and similar to NAS-Central in getting ideas for things to try)

    Is there any way to remotely administer or monitor it (web interface) ??

    Have you compared the CPU and memory usage with HellaNZB?

    Does it have any dependencies (Hella needs Python etc)?

    Do you use the faster yEnc module like Hella does?

    Is this compiled for ARM or MIPSEL (Linkstation is ARM)

    I'd certainly try it....

  3. #3
    Quote Originally Posted by methanoid View Post
    Is there any way to remotely administer or monitor it (web interface) ??
    The server can be administered remotely. The built-in console client allows monitoring of log messages, the current state of the queue and the current download speed. It can also edit the download queue (pause/unpause, move, delete items). There are three output modes; the most advanced is ncurses (use "nzbget -o outputmode=curses -C" to start a client in this mode). On Windows it looks similar to Far or Midnight Commander (a console interface).
    The server and client are the same executable; the running mode depends on the command-line parameters passed. To add a new file to the download queue a separate call is needed (nzbget -A filename.nzb), or you can just put a nzb-file into nzbget's monitoring folder.
    NZBGet (server and client) can be compiled and run on many platforms, including Linux and Windows.
    You can download the Windows version from the NZBGet site. Then you can use the Windows version to monitor downloads on your Linux server.

    Note: after version 0.3.0 was published, I found that I had not paid attention to the fact that endianness affects data transmitted over the network. The clients and server of version 0.3.0 are compatible only if they use the same endianness. Since x86 and mipsel are both little endian, it works. ARM processors can work in both modes (little and big endian), depending on the OS used.
    This compatibility issue has already been fixed in the source code (svn repository on sourceforge.net), but a new release (probably 0.3.1) is not published yet.
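
    For illustration, the usual fix looks like this (a minimal C++ sketch, not the actual nzbget code; the message struct and its field are hypothetical):
    Code:
    #include <cstdint>
    #include <arpa/inet.h>  // htonl/ntohl; on Windows use winsock2.h

    // Convert multi-byte fields to network byte order (big endian) before
    // sending, and back to host order after receiving, so little- and
    // big-endian hosts can talk to each other.
    struct QueueRequest
    {
        uint32_t m_structSize;  // hypothetical message field
    };

    void PrepareForSending(QueueRequest& msg)
    {
        msg.m_structSize = htonl(msg.m_structSize);  // host -> network order
    }

    void ProcessReceived(QueueRequest& msg)
    {
        msg.m_structSize = ntohl(msg.m_structSize);  // network -> host order
    }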

    Have you compared the CPU and memory usage with HellaNZB?
    NZBGet uses very little CPU power. On my WL500gP with a 2 MBit/s connection the top command reports something like 60-80% CPU idle time. Memory usage is not as low as it could be, because the program keeps the complete list of articles in the download queue in memory. The typical swap usage is not higher than 10 MB.
    I have not used Hellanzb heavily and therefore cannot make a good comparison. I can just say that in my quick tests with it, the CPU idle time reported by top was about 5%.

    Does it have any dependencies (Hella needs Python etc)?
    NZBGet is written in C++. It needs the following packages (for the oleg branch of optware): ncurses, libxml2, libpar2, libsigc++, libstdc++, zlib.

    Do you use the faster yEnc module like Hella does?
    NZBGet has a very fast internal decoder for yEnc.

    Is this compiled for ARM or MIPSEL (Linkstation is ARM)
    The optware repository from nslu2-linux.org has nzbget compiled for many targets. Which one do you use? What does your /opt/etc/ipkg.conf say?
    Last edited by hugbug; 11-12-2007 at 19:45. Reason: There are FREE output modes -> THREE

  4. #4
    I've also used nzbget to download a few nzb's and it does its job nicely.

    However I do miss the automatic unrarring.

    And because hellanzb still maxes out my connection (450 kbyte/s), I'll stick to that for a while.

    A few suggestions which would be nice in a future version:
    - automatic unpack
    - par2 verify 'on the fly', so that each .rar gets checked right after it has been downloaded (like NewsLeecher does), instead of checking all the rars when the whole nzb is downloaded. This should speed up the process.

  5. #5
    automatic unpack
    I have not developed a good unrar strategy yet. If a nzb-collection contains more than one archive, I need to detect this in order to unpack all of them before I delete the rars.
    However, nzbget can execute a user-defined post-process script after the download of a nzb-collection is completed. The unrarring could be done in this script.
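
    A very simple script could look like this (that the destination directory is passed as the first parameter is only my assumption here; check the script description in nzbget.conf.example):
    Code:
    #!/bin/sh
    # Hypothetical post-process sketch: unpack one archive after the
    # download has finished. $1 = destination directory (assumed).
    DESTDIR="$1"
    cd "$DESTDIR" || exit 1
    # unrar the first volume; multi-volume sets continue automatically
    first=$(ls *.rar 2>/dev/null | head -n 1)
    [ -n "$first" ] && unrar x -o- "$first"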

    par2 verify 'on the fly', so that each .rar gets checked right after it has been downloaded (like NewsLeecher does), instead of checking all the rars when the whole nzb is downloaded. This should speed up the process.
    Libpar2 (and par2cmdline) has no ability to save/reload the par-check state (like QuickPar does). If we want to hold the state, we must keep the parrepair object alive (this is what nzbget does while waiting for extra par-blocks). If the download queue is rearranged (for example, if a new download is added to the top of the queue), we need either to create another parrepair object for the new nzb-collection (and let the first object keep its memory) or to stop the first object and lose the par-check state.
    This makes the "on the fly" feature not easy, but if we make it adjustable (something like an option for "max kept objects") it might work.

    I will keep both features on the wish-list for the next releases.

  6. #6
    Quote Originally Posted by hugbug View Post
    I have not developed a good unrar strategy yet. If a nzb-collection contains more than one archive, I need to detect this in order to unpack all of them before I delete the rars.
    Hellanzb has the same problem with unpacking. SABnzbd, however, does support nzbs with multiple archives. I guess it either treats all the rars with the same name before the dot as one sub-collection, or it parses all the empty .par2 files to see which files belong to each sub-collection.
    (or a combination of both, if there are multiple archives with only one par2 set)

    If you need some testing on a newer release, let me know.

    Quote Originally Posted by hugbug View Post
    Libpar2 (and par2cmdline) has no ability to save/reload the par-check state (like QuickPar does). If we want to hold the state, we must keep the parrepair object alive (this is what nzbget does while waiting for extra par-blocks).
    OK, I didn't know that. I thought it would verify -> download extra par2s -> start a repair (beginning again with a verify) -> if some of the downloaded par2s are damaged, download some more -> again start a repair (beginning again with a verify). In this scenario it would verify the whole nzb 3 times, but because you use libpar2 and keep the object alive, it only verifies the rars once; after that it verifies the downloaded par2s and starts repairing.

    So if you were to verify the files one at a time and log the result, it would be quicker if all files are OK, but slower if repairing is needed, because then all files need to be verified again before repairing can commence.

  7. #7
    Currently nzbget has only one parrepair object. It monitors the download queue, and when the last file of some nzb-collection is downloaded (except additional par-files, which are paused by default), it starts the par-check for this nzb-collection (actually for each par-set of this collection, but typically one nzb-collection contains only one par-set). If after verifying the par-checker finds out that it needs extra pars, it unpauses them and waits (keeping libpar2's parrepair object alive). After all necessary par-files are downloaded, the par-checker verifies only them and starts the repair; it does not need to verify the other files again. This is the main difference to hellanzb, ninan and sabnzbd (they use par2cmdline).
    On my WL500gP just the verifying (just verifying!) of one dvd takes about one hour. So nzbget should complete a download with par-check/repair one hour earlier than the other programs.

    Quote Originally Posted by DrChair View Post
    So if you were to verify the files one at a time and log the result, it would be quicker if all files are OK, but slower if repairing is needed, because then all files need to be verified again before repairing can commence.
    With libpar2 we cannot verify just one file. The parrepair object takes a par-file as a parameter and verifies all files belonging to that par-set which it can find in a directory. After that, it can verify extra files we pass to it. That's why we cannot use an approach like "create parrepair object, verify one downloaded file, destroy parrepair object", or use one parrepair object for files from different par-sets. We must keep one parrepair object per par-set.
    In that case we do not need to start the verify again, provided the parrepair object for this nzb-collection (actually for the par-set) was kept alive. So verify/repair should not take longer in any case.

    The problem occurs only if files from different nzb-collections (or from different par-sets of one collection) are mixed in the download queue (remember, nzbget allows you to control, I mean move, individual files in the download queue, not just nzb-collections). If we want to verify a file which does not belong to an existing parrepair object, we need to create a new parrepair object, or we can cancel the verify process in the existing parrepair object (and lose all the verify-info for this par-set).

    But if we suppose that the download queue is not edited often, we could achieve good results with a limited number of parrepair objects (we need to be able to configure the max count), as in the sketch below.
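
    Something like this rough C++ sketch (not actual nzbget code; ParRepairState just stands in for libpar2's repairer object):
    Code:
    #include <cstddef>
    #include <list>
    #include <memory>
    #include <string>
    #include <utility>

    // Keep at most maxKept par-repair states alive; when the limit is
    // reached, the oldest state is evicted and its verify progress is lost.
    struct ParRepairState {};  // placeholder for libpar2's repairer object

    class RepairerCache
    {
    public:
        explicit RepairerCache(std::size_t maxKept) : m_maxKept(maxKept) {}

        std::shared_ptr<ParRepairState> Get(const std::string& parSet)
        {
            for (auto it = m_states.begin(); it != m_states.end(); ++it)
            {
                if (it->first == parSet)
                {
                    // move to the back: most recently used
                    m_states.splice(m_states.end(), m_states, it);
                    return m_states.back().second;
                }
            }
            if (m_states.size() >= m_maxKept)
            {
                m_states.pop_front();  // oldest par-set loses its verify state
            }
            m_states.emplace_back(parSet, std::make_shared<ParRepairState>());
            return m_states.back().second;
        }

    private:
        std::size_t m_maxKept;
        std::list<std::pair<std::string, std::shared_ptr<ParRepairState>>> m_states;
    };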

    This also means a lot of coding/testing work, and I'm not sure whether it (another one-hour advantage for a dvdr) is worth it.

  8. #8

    Client/Server

    Hugbug,

    Can you explain the client/server option of nzbget?
    From what I understand, I could have a nzbget server running on my "download server" and add nzb-files using a nzbget client running on my pc. Does this mean I can switch off my pc after adding a file, or does nzbget still need the client to communicate?

    Cheers for making a very stable download tool!

    Jeroen

  9. #9
    Quote Originally Posted by Jeroen van Omme View Post
    Can you explain the client/server option of nzbget?
    From what I understand, I could have a nzbget server running on my "download server" and add nzb-files using a nzbget client running on my pc. Does this mean I can switch off my pc after adding a file, or does nzbget still need the client to communicate?
    You can shut down the client (PC) at any time.

    A few tips:
    1) Create a shortcut with the command "nzbget.exe -o outputmode=curses -C" on the desktop. If you want to check the state of the server, start the client via the shortcut. Then use the Q-key to stop the client; this key works only in the curses output mode, for the other output modes you can use Ctrl+C or just close the window.
    2) Create a shortcut with the command "nzbget.exe -A". To add a file to the download queue, drag'n'drop the file onto the shortcut. You can also put the file into the monitoring folder via a samba share or ftp.
    3) To add a file to the top of the download queue, create and use a shortcut with the command "nzbget.exe -A -T".
    4) You can also create new items in explorer's context menu for nzb-files. In this case you need to add "%1" at the end of the commands, for example: nzbget.exe -A "%1".
    Last edited by hugbug; 12-12-2007 at 11:52. Reason: added (4)

  10. #10

    550 Error

    Guys,

    I've run into a weird problem. I run nzbget on my WL-HDD. Placing nzb-files and retrieving downloaded files is done via FTP.
    Nzbget has run successfully for a number of weeks.
    However when I want to retrieve one particular fileset I get a 550 error (Failed to change directory) when I want to view/copy/delete it. All other downloads can be copied without a problem; just this one has issues...

    Any suggestions?

    Jeroen

    PS. Happy holidays!

  11. #11
    Quote Originally Posted by Jeroen van Omme View Post
    However when I want to retrieve one particular fileset I get a 550 error (Failed to change directory) when I want to view/copy/delete it.
    If you can access the directory using telnet, then it is probably a problem with the filename: it may contain characters the ftp server does not like.
    If you can't access the directory, it is probably a problem with the filesystem. Try to repair it.

  12. #12
    Quote Originally Posted by hugbug View Post
    If you can access the directory using telnet, then it is probably a problem with the filename: it may contain characters the ftp server does not like.
    If you can't access the directory, it is probably a problem with the filesystem. Try to repair it.
    Thanks for the quick reply, hugbug.
    - Even with telnet I can't change to the directory, so it's not limited to ftp.
    - How would I go about fixing my filesystem?

  13. #13
    It depends on the filesystem. For me it is simple: I'm using a usb-hdd with a fat32 filesystem, so I just attach it to a pc and check it under windows.

    For ext2/ext3 there is a tool, e2fsck. Its documentation suggests not checking mounted filesystems. Is it possible to unmount a disk on the WL-HDD? Try asking for help in the WL-HDD subforum.

    But maybe you just have no permissions to access the directory (I don't know why that would be)? Try to change them with chmod. Even if you are logged in as admin (root) you need access rights to enter a directory, but as admin (root) you can give those rights to yourself.
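
    For example (all paths here are hypothetical; check the output of "mount" for the real device):
    Code:
    # e2fsck must run on an unmounted filesystem
    umount /dev/discs/disc0/part1
    e2fsck -f /dev/discs/disc0/part1
    # if it is only a permissions problem:
    chmod -R 755 /path/to/problem/dir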

  14. #14

    No luck

    Hugbug,

    I tried e2fsck, but it didn't solve my problem.
    I also tried rebooting the WL-HDD (thinking maybe some process was locking the dir), but this also didn't work. It's just plain weird: all previous files can be viewed/removed etc., and files downloaded at a later time also do not give any problems... oh well, for now I'll just leave this dir alone, it's not hogging that much space on the HD...


    Jeroen
    Last edited by Jeroen van Omme; 24-12-2007 at 10:57. Reason: typo

  15. #15
    I recently switched over from hellanzb to nzbget, because I realized that unpacking the files from my router to my desktop doesn't take longer than just copying them.

    And yesterday I ran into the following situation:
    I was downloading a collection which I got from exporting to nzb in Newsleecher. This collection contains a few broken files and the complete reposted versions of those broken files.
    The result: a broken file.part17.rar and a complete file.part17.rar_duplicate1

    In my opinion, in case of broken files the duplicates of those files should also be verified, to see if they are better (just like QuickPar does). Or is this also a limitation of libpar2?
    If that's the case, perhaps it might be possible to determine the best version among the duplicates before starting the verify, by looking at the file size? Because in my situation all the broken files were smaller than they should be, and the duplicates had the right size.

    One last thing that is bothering me: if I add a lot of nzb's at once, the memory usage gets quite big (about 8 MB per dvd5). But worse, it doesn't seem to free the used memory. I downloaded a collection containing 6 dvd5's -> memory usage is 54 MB. However, the last dvd finished repairing 4 days ago, and still the memory usage is 54 MB.
    Today I emptied the queue (all paused par2 files) but it still uses 54 MB.
    The only thing that works is to stop and restart the daemon.

    Could you look into why nzbget doesn't free up memory when it isn't needed anymore?

    And would it be possible to add an option to specify the number of collections to keep in memory? (Because now, when I want to download a lot of nzbs, I keep all the nzb's in another directory and move them into the nzbget directory 2 at a time.)

    Finally, it might be useful to have a command to empty the whole queue (nzbget -E D -I * isn't working).


