HTTrack: Copying Websites for Offline Browsing

HTTrack is a free, easy-to-use utility for offline browsing: it copies a website and saves it locally.

HTTrack lets you download a website from the Internet into a local directory, recursively building the directory tree and fetching all HTML files, images, and other files from the server to your computer.

HTTrack preserves the original site's relative link structure, so you can simply open a page of the "mirrored" site in your browser and browse the whole site offline. HTTrack can also update an existing mirror and resume interrupted downloads.
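As a minimal sketch of that workflow (the URL `https://example.com/` and the `./mysite` directory are placeholders, not taken from the article): a first run creates the mirror, and a later run with `-i` resumes an interrupted mirror from the cache stored inside the output directory. The script below only prints the command lines so it can be inspected safely; run them directly once httrack is installed.

```shell
#!/bin/sh
# Hypothetical target URL and output directory -- substitute your own.
URL="https://example.com/"
DIR="./mysite"

# First run: create the mirror (downloads the site into $DIR,
# along with the cache httrack uses for later updates).
MIRROR_CMD="httrack $URL -O $DIR"

# Later run: -i continues an interrupted mirror using that cache
# (see the "Action options" section of the help output below).
RESUME_CMD="httrack -i -O $DIR"

# Print the command lines; execute them once httrack is installed.
echo "$MIRROR_CMD"
echo "$RESUME_CMD"
```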

HTTrack is fully configurable and has an integrated help system.

WinHTTrack is the Windows release of HTTrack (from Windows 2000 up to Windows 10 and later), and WebHTTrack is the Linux/Unix/BSD release.

Downloading a Site with HTTrack

It offers a wide range of options, as shown below:

> httrack --help
HTTrack version 3.03BETAo4 (compiled Jul 1 2001)
	usage: ./httrack <URLs> [-option] [+<FILTERS>] [-<FILTERS>]
	with options listed below: (* is the default value)
General options:
  O  path for mirror/logfiles+cache (-O path_mirror[,path_cache_and_logfiles]) (--path <param>)
 %O  top path if path defined (-O path_mirror[,path_cache_and_logfiles])
Action options:
  w *mirror web sites (--mirror)
  W  mirror web sites, semi-automatic (asks questions) (--mirror-wizard)
  g  just get files (saved in the current directory) (--get-files)
  i  continue an interrupted mirror using the cache
  Y  mirror ALL links located in the first level pages (mirror links) (--mirrorlinks)
Proxy options:
  P  proxy use (-P proxy:port or -P user:pass@proxy:port) (--proxy <param>)
 %f *use proxy for ftp (f0 don't use) (--httpproxy-ftp[=N])
Limits options:
 rN set the mirror depth to N (* r9999) (--depth[=N])
 %eN set the external links depth to N (* %e0) (--ext-depth[=N])
 mN maximum file length for a non-html file (--max-files[=N])
 mN,N'                  for non html (N) and html (N')
 MN maximum overall size that can be uploaded/scanned (--max-size[=N])
 EN maximum mirror time in seconds (60=1 minute, 3600=1 hour) (--max-time[=N])
 AN maximum transfer rate in bytes/seconds (1000=1kb/s max) (--max-rate[=N])
 %cN maximum number of connections/seconds (*%c10)
 GN pause transfer if N bytes reached, and wait until lock file is deleted (--max-pause[=N])
Flow control:
 cN number of multiple connections (*c8) (--sockets[=N])
 TN timeout, number of seconds after a non-responding link is shut down (--timeout)
 RN number of retries, in case of timeout or non-fatal errors (*R1) (--retries[=N])
 JN traffic jam control, minimum transfer rate (bytes/seconds) tolerated for a link (--min-rate[=N])
 HN host is abandoned if: 0=never, 1=timeout, 2=slow, 3=timeout or slow (--host-control[=N])
Links options:
 %P *extended parsing, attempt to parse all links, even in unknown tags or Javascript (%P0 don't use) (--extended-parsing[=N])
  n  get non-html files 'near' an html file (ex: an image located outside) (--near)
  t  test all URLs (even forbidden ones) (--test)
 %L <file> add all URLs located in this file (one URL per line) (--list <param>)
Build options:
 NN structure type (N0 *original structure, N1+: see below) (--structure[=N])
     or user defined structure (-N "%h%p/%n%q.%t")
 LN long names (L1 *long names / L0 8-3 conversion) (--long-names[=N])
 KN keep original links (e.g. http://www.adr/link) (K0 *relative link, K absolute links, K3 absolute URI links) (--keep-links[=N])
  x  replace external html links by error pages (--replace-external)
 %x  do not include password for external protected websites (%x0 include) (--no-passwords)
 %q *include query string for local files (useless, for information purpose only) (%q0 don't include) (--include-query-string)
  o *generate output html file in case of error (404..) (o0 don't generate) (--generate-errors)
  X *purge old files after update (X0 keep deleted files) (--purge-old[=N])
Spider options:
 bN accept cookies in cookies.txt (0=do not accept, *1=accept) (--cookies[=N])
  u  check document type if unknown (cgi, asp..) (u0 don't check, *u1 check but /, u2 check always) (--check-type[=N])
  j *parse Java Classes (j0 don't parse) (--parse-java[=N])
 sN follow robots.txt and meta robots tags (0=never, 1=sometimes, *2=always) (--robots[=N])
 %h force HTTP/1.0 requests (reduce features, only for old servers or proxies) (--http-10)
 %B tolerant requests (accept bogus responses on some servers, but not standard!) (--tolerant)
 %s update hacks: various hacks to limit re-transfers when updating (identical size, bogus response..) (--updatehack)
 %A assume that a type (cgi, asp..) is always linked with a mime type (-%A php3=text/html) (--assume <param>)
Browser ID:
  F  user-agent field (-F "user-agent name") (--user-agent <param>)
 %F  footer in Html code (-%F "Mirrored [from host %s [file %s [at %s]]]") (--footer <param>)
 %l  preferred language (-%l "fr, en, jp, *") (--language <param>)
Log, index, cache:
  C  create/use a cache for updates and retries (C0 no cache, C1 cache is prioritary, *C2 test update before) (--cache[=N])
  k  store all files in cache (not useful if files on disk) (--store-all-in-cache)
 %n  do not download locally erased files (--do-not-recatch)
 %v  display on screen filenames downloaded (in realtime) (--display)
  Q  no log - quiet mode (--do-not-log)
  q  no questions - quiet mode (--quiet)
  z  log - extra infos (--extra-log)
  Z  log - debug (--debug-log)
  v  log on screen (--verbose)
  f *log in files (--file-log)
  f2 one single log file (--single-log)
  I *make an index (I0 don't make) (--index)
 %I  make a searchable index for this mirror (*%I0 don't make) (--search-index)
Expert options:
 pN priority mode: (*p3) (--priority[=N])
      0 just scan, don't save anything (for checking links)
      1 save only html files
      2 save only non html files
     *3 save all files
      7 get html files before, then treat other files
  S  stay on the same directory
  D *can only go down into subdirs
  U  can only go to upper directories
  B  can both go up & down into the directory structure
  a *stay on the same address
  d  stay on the same principal domain
  l  stay on the same TLD (eg: .com)
  e  go everywhere on the web
 %H  debug HTTP headers in logfile (--debug-headers)
Guru options: (do NOT use)
 #0 Filter test (-#0 '*.gif' 'www.bar.com/foo.gif')
 #f Always flush log files
 #FN Maximum number of filters
 #h Version info
 #K Scan stdin (debug)
 #L Maximum number of links (-#L1000000)
 #p Display ugly progress information
 #P Catch URL
 #R Old FTP routines (debug)
 #T Generate transfer ops. log every minutes
 #u Wait time
 #Z Generate transfer rate statistics every minutes
 #! Execute a shell command (-#! "echo hello")
Command-line specific options:
  V  execute system command after each file ($0 is the filename: -V "rm \$0") (--userdef-cmd <param>)
 %U  run the engine with another id when called as root (-%U smith) (--user <param>)
Details: Option N
 N0 Site-structure (default)
 N1 HTML in web/, images/other files in web/images/
 N2 HTML in web/HTML, images/other in web/images
 N3 HTML in web/, images/other in web/
 N4 HTML in web/, images/other in web/xxx, where xxx is the file extension
    (all gif will be placed onto web/gif, for example)
 N5 Images/other in web/xxx and HTML in web/HTML
 N99 All files in web/, with random names (gadget!)
 N100 Site-structure, without www.domain.xxx/
 N101 Identical to N1 except that "web" is replaced by the site's name
 N102 Identical to N2 except that "web" is replaced by the site's name
 N103 Identical to N3 except that "web" is replaced by the site's name
 N104 Identical to N4 except that "web" is replaced by the site's name
 N105 Identical to N5 except that "web" is replaced by the site's name
 N199 Identical to N99 except that "web" is replaced by the site's name
 N1001 Identical to N1 except that there is no "web"
 N1002 Identical to N2 except that there is no "web"
 N1003 Identical to N3 except that there is no "web" (option set for g option)
 N1004 Identical to N4 except that there is no "web"
 N1005 Identical to N5 except that there is no "web"
 N1099 Identical to N99 except that there is no "web"
Details: User-defined option N
 %n Name of file without file type (ex: image)
 %N Name of file, including file type (ex: image.gif)
 %t File type (ex: gif)
 %p Path [without ending /] (ex: /someimages)
 %h Host name (ex: www.someweb.com)
 %M URL MD5 (128 bits, 32 ascii bytes)
 %Q query string MD5 (128 bits, 32 ascii bytes)
 %q small query string MD5 (16 bits, 4 ascii bytes)
 %s? Short name version (ex: %sN)
 %[param] param variable in query string
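The limit, flow-control, and spider options above can be combined on one command line. A hedged sketch (the URL and directory are placeholders): depth 3 (`-r3`), transfer rate capped at 100 KB/s (`-A100000`), four simultaneous connections (`-c4`), and robots.txt always honoured (`-s2`). The script only prints the command line so it is safe to run anywhere.

```shell
#!/bin/sh
# Placeholder URL and output directory -- substitute your own.
URL="https://example.com/"
DIR="./mirror"

# -r3      mirror depth 3            (Limits options)
# -A100000 max 100000 bytes/second   (Limits options)
# -c4      4 simultaneous sockets    (Flow control)
# -s2      always obey robots.txt    (Spider options)
CMD="httrack $URL -O $DIR -r3 -A100000 -c4 -s2"

# Print the command; execute it directly once httrack is installed.
echo "$CMD"
```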

There is a detailed manual explaining all the options for downloading websites, which you can read here:

– HTTrack User's Guide (3.10).

Download HTTrack for PC

Download link: httrack_x64.exe.
