
Thread: nzbget - binary newsgrabber

  1. #1
    It depends on the filesystem. For me it is simple: I'm using a USB HDD with a FAT32 filesystem, so I just attach it to a PC and check it under Windows.

    For ext2/ext3 there is a tool: e2fsck. Its documentation warns against checking mounted filesystems. Is it possible to unmount a disk on the WL-HDD? Try asking for help in the WL-HDD subforum.
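    If it can be unmounted, the check would look roughly like this (a sketch; the device path is a guess, check the output of "mount" for the real one on your box):
    Code:
    # unmount the disk first, then force a filesystem check
    umount /dev/discs/disc0/part1
    e2fsck -f /dev/discs/disc0/part1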

    But maybe you simply don't have permission to access the directory (I don't know why that would be)? Try changing the permissions with chmod. Even if you are logged in as admin (root), you need access rights to enter a directory, and as admin (root) you can grant those rights to yourself.
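    For example (a minimal sketch; the path and user name are placeholders):
    Code:
    # as root: take ownership of the directory and grant yourself full access;
    # the capital X adds execute (enter) permission only where it makes sense
    chown -R admin /path/to/problem-dir
    chmod -R u+rwX /path/to/problem-dir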

  2. #2

    No luck

    Hugbug,

    I tried e2fsck, but it didn't solve my problem.
    I also tried rebooting the WL-HDD (thinking maybe some process was locking the dir), but that didn't work either. It's just plain weird: all earlier files can be viewed, removed, etc., and files downloaded later don't cause any problems either. Oh well, for now I'll just leave this dir alone; it's not hogging that much space on the HD.


    Jeroen

  3. #3
    I recently switched from hellanzb to nzbget, after realizing that unpacking the files from my router to my desktop doesn't take longer than just copying them.

    And yesterday I ran into the following situation:
    I was downloading a collection that I got by exporting to nzb in NewsLeecher. This collection contains a few broken files plus the complete reposted versions of those broken files.
    The result: a broken file.part17.rar and a complete file.part17.rar_duplicate1.

    In my opinion, when files are broken, their duplicates should also be verified, to see if they are better (just like QuickPar does). Or is this also a limitation of libpar2?
    If so, perhaps the best version among the duplicates could be determined before verification even starts, simply by looking at the file size (see the sketch below). Because in my situation, all the broken files were smaller than they should be, and the duplicates had the right size.
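    A rough sh sketch of that size heuristic, using the file.part17.rar / file.part17.rar_duplicate1 naming from above (just to illustrate the idea, not nzbget code):
    Code:
    # for each duplicate, keep it in place of the original if it is larger
    for dup in *_duplicate1; do
        orig="${dup%_duplicate1}"
        if [ -f "$orig" ] && [ "$(wc -c < "$dup")" -gt "$(wc -c < "$orig")" ]; then
            mv "$dup" "$orig"   # the bigger copy is more likely the complete repost
        fi
    done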

    One last thing that is bothering me: if I add a lot of nzbs at once, the memory usage gets quite big (about 8 MB per DVD5). But worse, nzbget doesn't seem to free that memory again. I downloaded a collection containing 6 DVD5s -> memory usage is 54 MB. However, the last DVD finished repairing 4 days ago, and the memory usage is still 54 MB.
    Today I emptied the queue (all paused par2 files), but it still uses 54 MB.
    The only thing that works is to stop and restart the daemon.

    Could you look into why nzbget doesn't free up memory when it isn't needed anymore?

    And would it be possible to add an option to specify the number of collections to keep in memory? (Because right now, when I want to download a lot of nzbs, I keep them all in another directory and move them into the nzbget directory two at a time.)

    Finally, it might be useful to have a command to empty the whole queue (nzbget -E D -I * isn't working).

  4. #4
    Hi DrChair,

    Quote Originally Posted by DrChair
    In my opinion, when files are broken, their duplicates should also be verified, to see if they are better (just like QuickPar does). Or is this also a limitation of libpar2?
    Normally libpar2 (like par2cmdline) scans only the files listed in the par2 file, but it is possible to load additional files (via a special function) after the scan is completed. I'll see what I can do in your case. I will need to parse the filenames so that only duplicate files are loaded, not every file in the directory (that costs time). Could you please send me your nzb file for testing?
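    For comparison, par2cmdline already accepts extra files to scan on the command line, so the manual equivalent (with the filenames from the post above) would be roughly:
    Code:
    # pass the duplicate as an extra file; its data is scanned and can be
    # used to repair or replace the broken original
    par2 repair file.par2 file.part17.rar_duplicate1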

    One last thing that is bothering me: if I add a lot of nzbs at once, the memory usage gets quite big (about 8 MB per DVD5).
    That is strange; it should free the memory. In any case, I have already addressed this issue in the current (development) version: nzbget now keeps the article info on disk and loads it just before the download of a file starts. In my test I added a few big nzbs (10-15 GB of data in each, about 60 GB in total), and memory consumption was as little as 2-3 MB for the whole program (virtual memory usage as reported by "top"), with 4 MB of swap used.

    Finally, it might be useful to have a command to empty the whole queue (nzbget -E D -I * isn't working).
    In the current (development) version I have implemented new commands for group editing (delete/move/pause/unpause all files belonging to the same nzb) with one simple command: e.g. "nzbget -E G D 1" deletes all files from the nzb group to which the file with ID=1 belongs. You can also do this via the curses frontend (use the G key to switch the group view on/off).
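    A small usage sketch (the ID can be any file ID from the group, as reported by the list command, assuming "nzbget -L" is available in this build):
    Code:
    # find an ID of any file in the unwanted group...
    nzbget -L
    # ...then delete the whole group in one go
    nzbget -E G D 1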

    If you have a compile environment for optware, you can get the mentioned version of nzbget from svn with:
    Code:
    svn co --revision 52 http://nzbget.svn.sourceforge.net/svnroot/nzbget/trunk nzbget-0.3.1-testing
    Then run "configure" and "make dist", copy created file "nzbget-0.3.1-testing.tar.gz" to "optware/downloads", edit version number in "optware/make/nzbget.mk" to "NZBGET_VERSION=0.3.1-testing" and compile the package.

    Or I can send you the compiled ipk package instead.
    Let me know if you want to test this version.
    The ChangeLog covers revision 28; not all new features are listed there, so see the svn change log for changes after revision 28.
    Note: the communication protocol was changed in version 0.3.1, so you will not be able to use the Windows version 0.3.0 with a 0.3.1 server. I could send you the updated compiled Windows version too.
    I'm going to release version 0.3.1 in the next few weeks.

  5. #5
    Never mind the memory issues...

    I just found the svn repository and compiled 0.3.1+r52.
    The result: still only 10 MB used with 9 nzbs in the queue.

    Just a cosmetic tip: add a \n at line 721 (and perhaps line 722) in Options.cpp

    And I just found an nzb that gets parsed wrong: when it's in the queue, only part of the subject is displayed instead of the filenames. It also can't tell the difference between rars and pars, so it doesn't pause the pars.

    I'll pm you that nzb.

  6. #6
    Hi all,

    I installed and configured nzbget on my Synology DS-106, and I get the following error message when trying to launch it (in server or client mode):

    DiskStation> nzbget -s
    FATAL ERROR: Invalid option "DaemonUserName"

    I tried changing the DaemonUserName line in the configuration file,
    DaemonUserName=root
    to "admin" or other user names from my Syno, but it doesn't change anything.

    I'm sure I'm missing something simple...

    Can you help?

    Thanks.

  7. #7
    Quote Originally Posted by norberto
    DiskStation> nzbget -s
    FATAL ERROR: Invalid option "DaemonUserName"
    Looks like you are trying to use a configuration file from a newer program version than the one installed.
    You probably installed the program from a package repository (version 0.3.0), but took the configuration file from svn/trunk (the current development version).

    You should have an appropriate configuration file example installed in "/opt/share/doc/nzbget/nzbget.conf.example" (it comes with the optware package). It can also be downloaded from svn/tags/0.3.0.

    Or you can just comment out (with the "#" character) all the (new) options the program complains about, for example:
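    Code:
    # in nzbget.conf: this 0.3.0 build doesn't know the option yet, so disable it
    #DaemonUserName=root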

  8. #8
    You're right: I followed the instructions found on the Synology forum and downloaded the wrong configuration file.

    It seems it's working fine now.

    Thanks a lot for your prompt answer!

  9. #9
    Hi all,

    Thanks for this topic; NZBGet is a GREAT alternative to SABnzbd.
    Currently I'm using the following post-processing script:

    Code:
    #!/bin/sh
     
    #  1 - path to destination dir, where downloaded files are located;
    #  2 - name of nzb-file processed;
    #  3 - name of par-file processed (if par-checked) or empty string (if not);
    #  4 - result of par-check:
    #      0 - not checked: par-check disabled or nzb-file does not contain any
    #          par-files;
    #      1 - checked and failed to repair;
    #      2 - checked and successfully repaired;
    #      3 - checked and can be repaired but repair is disabled;
    #  5 - state of nzb-job:
    #      0 - there are more collections in this nzb-file queued;
    #      1 - this was the last collection in nzb-file;
    #  6 - indication of failed par-jobs for current nzb-file:
    #      0 - no failed par-jobs;
    #      1 - current par-job or any of the previous par-jobs for the
    #          same nzb-file failed;
     
    Sendmail=False
    from=MyBook@isp.com
    sendto=ReMiA@isp.com
    server=mail.isp.com
    nzbdir=/tmp/harddisk/torrent/target/
     
    UNRAR=True
    declare -a RARFILES
    dir=$1
    dest=/tmp/harddisk/torrent/target/
    PauseServer=True
     
    if [ "$5" = "1" ]
    then
          chmod -R 766 "$1"
          subject="Download of $1 completed"
          if [ "$4" = "0" ] ; then
                echo "PAR2 not checked: par-check disabled or nzb-file does not contain any par-files" > "$nzbdir/message.txt"
          elif [ "$4" = "1" ] ; then
                echo "PAR2 checked and failed to repair" > "$nzbdir/message.txt"
          elif [ "$4" = "2" ] ; then
                echo "PAR2 checked and sucessfully repaired" > "$nzbdir/message.txt"
          elif [ "$4" = "3" ] ; then
                echo "PAR2 checked and can be repaired but repair is disabled" > "$nzbdir/message.txt"
          elif [ "$4" = "" ] ; then
                echo "No PAR2 returncode" > "$nzbdir/message.txt"
          fi
          if [ -f "$1/_brokenlog.txt" ] ; then
                    echo  >> "$nzbdir/message.txt"
                    echo "Broken Files:" >> "$nzbdir/message.txt"
                cat "$1/_brokenlog.txt" >> "$nzbdir/message.txt"
          fi
    fi
     
    if [ $UNRAR = "True" ] ; then
    # PAUSE Server
            if [ $PauseServer = "True" ] ; then
                    /opt/bin/nzbget -P
            fi
            cd "$dir"
            RARFILES=(`ls | grep -E [.][pP][Aa][Rr][Tt][0]*[1][.][Rr][Aa][Rr] | tr ' ' '§'`)
            number_rar=${#RARFILES[*]}
            if [ $number_rar -ge 1 ] ; then
                    for b in "${RARFILES[@]}"
                    do
                            rar=${b//§/ }
                            f=${rar/[.][pP][Aa][Rr][Tt][01]*[.][Rr][Aa][Rr]/}
                            destdir=$dest$f
                            mkdir "$destdir"
                            unrar x -y -o- -p- "$rar" "$destdir"
                            echo "UNRARRED $rar to $destdir" >> "$nzbdir/message.txt"
                    done
            else
                    RARFILES=(`ls | grep -E [.][Rr][Aa][Rr] | tr ' ' '§'`)
                    number_rar=${#RARFILES[*]}
                    if [ $number_rar -ge 1 ] ; then
                    for b in "${RARFILES[@]}"
                    do
                            rar=${b//§/ }
                            f=${rar/[.][Rr][Aa][Rr]/}
                            destdir=$dest$f
                            mkdir "$destdir"
                            unrar x -y -o- -p- "$rar" "$destdir"
                            echo "UNRARRED $rar to $destdir" >> "$nzbdir/message.txt"
                    done
                    fi
            chmod -R 766 "$destdir"
            fi
     
            # UN-Pause server
            if [ $PauseServer = "True" ] ; then
            /opt/bin/nzbget -U
            fi
    fi
     
    if [ $Sendmail = "True" ] ; then
          sendEmail -t $sendto -f $from -u $subject -s $server -o message-file="$nzbdir/message.txt" -q
    fi
    exit 0
    But there's an issue with the 'declare' on line 27 of the script... what can I do about this? Or if anyone has a working post-processing script to unrar and clean up files, post it here; it seems more people are interested!
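    A side note on the likely cause: "declare", the arrays, and the ${var//...} substitutions are bash features, while the #!/bin/sh shebang means busybox sh runs the script on these boxes. Assuming the optware bash package is installed at /opt/bin/bash, changing the first line of the script should help:
    Code:
    #!/opt/bin/bash
    # (was #!/bin/sh; the rest of the script can stay unchanged)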

  10. #10
    The following script is used on a Popcorn Hour box.
    It was not designed to be easily portable, so you need to change the absolute paths.
    Code:
    #!/bin/sh 
    DownloadDir="$1"   
    NzbFile="$2" 
    ParCheck=$4 
    NzbState=$5  
    ParFail=$6 
    
    # Make a logfile
    log="$DownloadDir"/unrar.log
    export PATH=$PATH:/opt/sybhttpd/localhost.drives/HARD_DISK/Download
    
    # Check if all is downloaded and repaired
    if [ "$NzbState" -eq 1 -a "$ParCheck" -eq 2 -a "$ParFail" -eq 0 ] 
    then 
       cd "$DownloadDir" 
    
    # Make a temporary directory to store the unrarred files
       mkdir extracted
       
    # Remove the Par files
       rm *.[pP][aA][rR]2 
       
    # Pause NZBGet download until script finishes
       /mnt/syb8634/bin/nzbget -P -c /opt/sybhttpd/localhost.drives/HARD_DISK/.nzbget/nzbget.conf
       
    # Unrar the files (if any) to the temporary directory, if there are no rar files this will do nothing
          if (ls *.rar >/dev/null)
       then
       /mnt/syb8634/bin/unrar x -y -p- -o+ "*.rar"  ./extracted/ 
       fi
       
    # Remove the rar files (temporarily disabled; just remove the # to activate it)
    #   rm *.r[0-9][0-9]
    #   rm *.rar
    #   rm *.s[0-9][0-9] 
       
    # Go to the temp directory and try to unrar again.
    # If there are any rars inside the extracted rars, these will now also be unrarred
       cd extracted
       if (ls *.rar >/dev/null)
       then
       /mnt/syb8634/bin/unrar x -y -p- -o+ "*.rar"
       fi
       
    # Delete the rar files (temporarily disabled; just remove the # to activate it)
    #   rm *.r[0-9][0-9]
    #   rm *.rar
    #   rm *.s[0-9][0-9] 
       
    # Move everything back to the Download folder
       mv * ..
       cd ..
       
    # Clean up the temp folder
       rmdir extracted
       chmod -R a+rw . 
       rm *.nzb
       rm *.1
    
    # Rename img file to iso so the NMT can read it
    # It will be renamed to .img.iso so you can see that it has been renamed
       if (ls *.img >/dev/null)
       then
       imgname=`find . -name "*.img" |awk -F/ '{print $NF}'`
       mv "$imgname" "$imgname.iso"
       fi   
       
       
    # Unpause NZBGet
       /mnt/syb8634/bin/nzbget -U -c /opt/sybhttpd/localhost.drives/HARD_DISK/.nzbget/nzbget.conf
    
    fi  
    #################
    My observations:
    - the command "export PATH=$PATH:/opt/sybhttpd/localhost.drives/HARD_DISK/Download" is probably not needed;
    - the commands to pause/unpause are optional; you may comment them out to let nzbget keep downloading during unrarring.

    That script actually works; I've tested it on a Popcorn Hour. It should work on the Asus as well.

    The script was written by Philos and Werner from the Popcorn Hour forum. See that thread.

  11. #11
    I understand, and I know it should work, but my log says:

    Thu Jul 31 18:26:07 2008 DETAIL Post-Process: /tmp/harddisk/torrent//.nzb/scripts/myscript.sh: /tmp/harddisk/torrent//.nzb/scripts/myscript.sh: 27: declare: not found
    Thu Jul 31 18:26:07 2008 DETAIL Post-Process: /tmp/harddisk/torrent//.nzb/scripts/myscript.sh: /tmp/harddisk/torrent//.nzb/scripts/myscript.sh: 60: Syntax error: "(" unexpected (expecting "fi")

  12. #12
    OK, I'll test it on the Asus.
