
Thread: nzbget - binary newsgrabber

  1. #1

    nzbget - binary newsgrabber

    Hi guys,

    I proudly present the new version of NZBGet - 0.3.0.

    NZBGet is a binary newsgrabber, which downloads files from usenet based on information given in nzb-files. NZBGet can be used in standalone and in server/client modes.

    The new version has many improvements, including a fast yEnc-decoder and par-check and repair.
    It runs very well on the WL500gP (in contrast to the previous version); in fact, this device was the primary development target for the new version.
    You can read more info on nzbget on its web site and sourceforge project page.

    The new version is now available in the package repository. You need to use the oleg-branch. A swap-file is also highly recommended.

    To install:
    Code:
    ipkg update
    ipkg install nzbget
    This installs a sample configuration file as /opt/share/doc/nzbget/nzbget.conf.example. Please rename it to .nzbget and put it into your home directory. Alternatively, the config file can be stored in one of the following locations:
    /etc/nzbget.conf
    /usr/etc/nzbget.conf
    /usr/local/etc/nzbget.conf
    /opt/etc/nzbget.conf
    Choose the one you like the most.
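
    For example, to use the home-directory variant (or the /opt/etc location), copy the sample file into place:
    Code:
    # home-directory variant
    cp /opt/share/doc/nzbget/nzbget.conf.example ~/.nzbget
    # or the /opt/etc variant
    cp /opt/share/doc/nzbget/nzbget.conf.example /opt/etc/nzbget.conf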

    You need to set at least the option "MAINDIR" and one newsserver in the configuration file. Please note that the MAINDIR must already exist; it will not be created automatically.
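
    A minimal sketch of the relevant lines: MAINDIR is the option named above, but the news-server option names here are only an assumption taken from the sample config, so check nzbget.conf.example for the exact spelling in your version.
    Code:
    # minimal sketch - the server1.* names are assumed from the sample config
    # and may differ in your nzbget version; MAINDIR must already exist
    MAINDIR=/opt/downloads
    server1.host=news.example.com
    server1.port=119
    server1.username=myuser
    server1.password=mypass
    server1.connections=4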

    For a usage description, please refer to the web site and to the help screen ("nzbget -h").

    Basically you start a daemon first:
    Code:
    nzbget -D
    To add a new download:
    Code:
    nzbget -A /path/to/your/file.nzb
    To list current download queue:
    Code:
    nzbget -L
    To see the server output (remote mode):
    Code:
    nzbget -C
    or (to use curses output directly, if it was not selected in the configuration file):
    Code:
    nzbget -o outputmode=curses -C
    Please let me know if you have any problems installing or running nzbget.

    Best Regards,
    hugbug.

  2. #2
    Hmmm, interesting, and thanks; I didn't spot that this was out.

    I might try this on my Linkstation (I don't have a WL500g, but this is a good forum, similar to NAS-Central, for getting ideas of things to try).

    Is there any way to remotely administer or monitor it (web interface) ??

    Have you compared the CPU and memory usage with HellaNZB?

    Does it have any dependencies (Hella needs Python etc)?

    Do you use the faster yEnc module like Hella does?

    Is this compiled for ARM or MIPSEL (Linkstation is ARM)

    I'd certainly try it....

  3. #3
    methanoid
    Is there any way to remotely administer or monitor it (web interface) ??
    The server can be administered remotely. The built-in console client allows monitoring of log messages, the current state of the queue and the current download speed. It can also edit the download queue (pause/unpause, move, delete items). There are three output modes; the most advanced is ncurses (use "nzbget -o outputmode=curses -C" to start a client in this mode). On Windows it looks similar to Far or Midnight Commander (a console interface).
    The server and client are the same executable; the running mode depends on the command-line parameters passed. To add a new file to the download queue, a separate call is needed (nzbget -A filename.nzb), or you can just put an nzb-file into nzbget's monitoring folder.
    NZBGet (server and client) can be compiled and run on many platforms, including Linux and Windows.
    You can download the Windows version from the NZBGet site and then use it to monitor downloads on your Linux server.
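
    A sketch of monitoring a remote server from another machine; the -C call and the outputmode option are described above, but the serverip/serverport option names and the port number are assumptions on my part, so check the sample config for the actual names and the default port.
    Code:
    # assumed option names (serverip/serverport) - verify against nzbget.conf.example
    nzbget -o serverip=192.168.1.10 -o serverport=6789 -o outputmode=curses -C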

    Note: after version 0.3.0 was published, I found that I had not paid attention to the fact that endianness affects data transmitted over the network. The clients and server of version 0.3.0 are compatible only if they use the same endianness. Since x86 and mipsel are both little-endian, it works. ARM processors can work in both modes (little- and big-endian), depending on the OS used.
    This compatibility issue has already been fixed in the source code (svn repository on sourceforge.net), but a new release (probably 0.3.1) has not been published yet.

    Have you compared the CPU and memory usage with HellaNZB?
    NZBGet uses very little CPU power. On my WL500gP with a 2 MBit/s connection, the top command reports something like 60-80% CPU idle time. Memory usage is not as low as it could be, because the program keeps the complete list of articles in the download queue in memory. The typical swap usage is not higher than 10 MB.
    I have not used HellaNZB heavily and therefore cannot make a good comparison. I can only say that in my quick tests the CPU idle time reported by top was about 5%.

    Does it have any dependencies (Hella needs Python etc)?
    NZBGet is written in C++. It needs the following packages (for the oleg-branch of Optware): ncurses, libxml2, libpar2, libsigc++, libstdc++, zlib.

    Do you use the faster yEnc module like Hella does?
    NZBGet has a very fast internal decoder for yEnc.

    Is this compiled for ARM or MIPSEL (Linkstation is ARM)
    The Optware repository from nslu2-linux.org has nzbget compiled for many targets. Which one do you use; what does your /opt/etc/ipkg.conf say?

  4. #4
    I've also used nzbget to download a few nzbs, and it does its job nicely.

    However I do miss the automatic unrarring.

    And because hellanzb still maxes out my connection (450 kbyte/s), I'll stick to that for a while.

    A few suggestions which would be nice in a future version:
    - automatic unpack
    - par2 verify 'on the fly', so that each .rar gets checked right after it has been downloaded (like NewsLeecher does), instead of checking all the rars once the whole nzb is downloaded. This should speed up the process.

  5. #5
    automatic unpack
    I have not developed a good unrar strategy yet. If an nzb-collection contains more than one archive, I need to detect that in order to unpack all of them before deleting the rars.
    However, nzbget can execute a user-defined post-process script after the download of an nzb-collection is completed. The unrarring could be done in this script.
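
    A rough sketch of such a post-process script doing the unrarring; the assumption that the destination directory arrives as the first argument is mine, so check the nzbget documentation for the exact parameters it passes to the script.
    Code:
    #!/bin/sh
    # hypothetical post-process script - the argument layout is an assumption
    DESTDIR="$1"            # assumed: directory of the completed collection
    cd "$DESTDIR" || exit 1
    # naive: extracts every .rar it finds; multi-volume sets may need
    # smarter selection of the first volume only
    for rar in *.rar; do
        [ -f "$rar" ] && unrar x -o+ "$rar"
    done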

    par2 verify 'on the fly', so that each .rar gets checked right after it has been downloaded (like NewsLeecher does), instead of checking all the rars once the whole nzb is downloaded. This should speed up the process.
    Libpar2 (and par2cmdline) has no ability to save/reload the par-check state (like QuickPar does). If we want to hold the state, we must keep the parrepair object alive (this is what nzbget does while waiting for extra par-blocks). If the download queue is rearranged (for example, if a new download is added to the top of the queue), we need either to create another parrepair object for the new nzb-collection (and allow the first object to keep its memory) or to stop the first object and lose the par-check state.
    This makes the "on the fly" approach not easy, but if we make it adjustable (something like a "max kept objects" option) it might work.

    I will keep both features on a wish-list for next releases.

  6. #6
    Quote Originally Posted by hugbug
    I have not developed a good unrar strategy yet. If an nzb-collection contains more than one archive, I need to detect that in order to unpack all of them before deleting the rars.
    HellaNZB has the same problem with unpacking. SABnzbd, however, does support nzbs with multiple archives. I guess it either treats all the rars with the same name before the dot as one sub-collection, or it parses the empty .par2 files to see which files belong to each sub-collection
    (or a combination of both, if there are multiple archives with only one par2 set).
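
    A quick shell sketch of that first grouping idea, just to illustrate it; the suffix patterns are my assumption about typical rar volume naming.
    Code:
    # list sub-collection base names by stripping typical rar volume suffixes,
    # so foo.part01.rar / foo.part02.rar / foo.r00 all collapse to "foo"
    ls *.rar *.r[0-9][0-9] 2>/dev/null \
      | sed -e 's/\.part[0-9]*\.rar$//' -e 's/\.r[0-9][0-9]$//' -e 's/\.rar$//' \
      | sort -u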

    If you need some testing on a newer release, let me know.

    Quote Originally Posted by hugbug
    Libpar2 (and par2cmdline) has no ability to save/reload the par-check state (like QuickPar does). If we want to hold the state, we must keep the parrepair object alive (this is what nzbget does while waiting for extra par-blocks).
    OK, I didn't know that. I thought it would verify -> download extra par2s -> start a repair (beginning again with verify) -> if some of the downloaded par2s are damaged, download some more -> start a repair again (beginning again with verify). In this scenario it would verify the whole nzb three times, but because you use libpar2 and keep the object alive, it only verifies the rars once; after that it verifies the downloaded par2s and starts repairing.

    So if you were to verify the files one at a time and log the result, it would be quicker if all files are OK, but slower if repairing is needed, because then all files need to be verified again before repairing can commence.

  7. #7
    I can't make a connection, neither on my NSLU2 nor on my PC via the Windows client.

    On NSLU2:

    In the first putty-session I have the server running:

    # nzbget -s
    [INFO] nzbget server-mode
    3 threads running, 0 KB/s, 0.00 MB remaining Limit 48 KB/S


    In a second putty-session (or a second screen-window) I try to connect:

    # nzbget -C
    Segmentation fault

  8. #8
    chief12345
    We need to build the program in debug mode to get debug messages. Do you have a build environment to compile the program? If not, I'll try to do this. Which feed are you using; is it http://ipkg.nslu2-linux.org/feeds/optware/nslu2/cross/stable/ ?
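
    For reference, a sketch of what a debug build from source would look like once a build environment is available; the --enable-debug switch is an assumption here, so check ./configure --help for the exact option in this version.
    Code:
    # sketch of a debug build - verify the configure switch for your version
    ./configure --enable-debug
    make
    make install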

  9. #9
    No, I don't have a compile environment.

    cross-feed.conf in /etc/ipkg says:

    http://ipkg.nslu2-linux.org/feeds/op...2/cross/stable

    I'd really appreciate your help.

  10. #10
    I'll let you know as soon as I have a working build environment.

    Have you tried other client commands?
    nzbget -L
    nzbget -G 100
    nzbget -A /path/to/filename.nzb
    nzbget -P
    nzbget -U
    nzbget -Q

    None works?

    Also try to put an nzb-file into the incoming directory ($MAINDIR/nzb), but you need to create the directory before you start the server. The program should find the file (after about one minute) and start downloading.
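
    For example, assuming MAINDIR is set to /opt/downloads in the config file:
    Code:
    # create the incoming directory before starting the server,
    # then drop the nzb-file in for nzbget to pick up
    mkdir -p /opt/downloads/nzb
    cp /path/to/filename.nzb /opt/downloads/nzb/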

  11. #11
    Hi hugbug,


    Other client commands don't work either.
    At least I tried
    nzbget -L
    nzbget -A /path/to/filename.nzb
    nzbget -Q

    None of them are working.

    Putting an nzb-file into $MAINDIR/nzb works. After a few minutes the server starts to download.
