Preface

First, let's install the tools we'll be using.

AWVS Installation

After downloading, run the installer.

From there it is mostly just clicking Next through the wizard.

Setting the AWVS Account and Password

Set whatever account (email) and password you like.

Setting the AWVS Management Page Port

The defaults are fine here.

If you need to reach the management page from other machines on the network, check Allow remote access to Acunetix.

Adding the AWVS Certificate

To access certain sites properly, install AWVS's self-signed certificate.

Installation complete.

Activating AWVS

Run the cracking tool with administrator privileges to activate it.

Setting the Language to Chinese

User --> Profile --> Language (Simplified Chinese) --> Save

Installing Xray

Xray is straightforward to use: download the cracked package and make sure a Python 3 environment is installed on the system (Python 3.10 is recommended; Python is used by the integration script later).

Xray --help

Let's see what Xray can do:

xray.exe --help

AWVS + Xray Integration

Starting the Xray Listener

First, start Xray listening:

.\xray.exe webscan --listen 127.0.0.1:7777 --html-output test.html
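With Xray listening on 127.0.0.1:7777 as a passive proxy, any HTTP(S) traffic routed through that port gets scanned automatically. As a quick sanity check (not part of the original tutorial), you can push a request through the proxy yourself; the target below is just Acunetix's public demo site, and the address/port are assumed to match the command above:

import requests

# Route traffic through the Xray listener started above
proxies = {"http": "http://127.0.0.1:7777", "https": "http://127.0.0.1:7777"}

# verify=False because Xray re-signs HTTPS traffic with its own CA (see the next section)
r = requests.get("http://testphp.vulnweb.com/", proxies=proxies, verify=False, timeout=10)
print(r.status_code)  # the request should also show up in Xray's console output

If the request appears in Xray's console, the listener is working.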

Installing the Xray CA Certificate

The first time Xray is started, it automatically generates a CA certificate in its directory.

Then double-click ca.crt to add the certificate manually:

  1. Install Certificate
  2. Install for Current User
  3. Automatically select the certificate store based on the type of certificate
  4. Finish

After it succeeds, a prompt saying the import was successful appears.
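As an alternative to importing ca.crt into the Windows certificate store, your own client code can trust it directly. A minimal sketch, assuming ca.crt sits in the current directory and Xray is still listening on 127.0.0.1:7777:

import requests

proxies = {"http": "http://127.0.0.1:7777", "https": "http://127.0.0.1:7777"}

# Validate the re-signed HTTPS connection against Xray's own CA instead of disabling verification
r = requests.get("https://example.com/", proxies=proxies, verify="ca.crt", timeout=10)
print(r.status_code)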

Writing the AWVS + Xray Integration Script

I "borrowed" (copied) a script from the internet:

import requests
import json
import urllib3
from urllib3.exceptions import InsecureRequestWarning

urllib3.disable_warnings(InsecureRequestWarning)

apikey = '**************************************************************'  # AWVS API key
headers = {'Content-Type': 'application/json', "X-Auth": apikey}

def addTask(url, target):
    # Add a target to AWVS and return its target_id
    try:
        url = ''.join((url, '/api/v1/targets/add'))
        data = {"targets": [{"address": target, "description": ""}], "groups": []}
        r = requests.post(url, headers=headers, data=json.dumps(data), timeout=30, verify=False)
        result = json.loads(r.content.decode())
        return result['targets'][0]['target_id']
    except Exception as e:
        print(e)
        return None  # let the caller's `if target_id:` check fail cleanly

def scan(url, target, Crawl, user_agent, profile_id, proxy_address, proxy_port):
    # Configure the target to proxy through Xray, then start the scan
    scanUrl = ''.join((url, '/api/v1/scans'))
    target_id = addTask(url, target)

    if target_id:
        data = {"target_id": target_id, "profile_id": profile_id, "incremental": False, "schedule": {"disable": False, "start_date": None, "time_sensitive": False}}
        try:
            configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent)
            response = requests.post(scanUrl, data=json.dumps(data), headers=headers, timeout=30, verify=False)
            result = json.loads(response.content)
            return result['target_id']
        except Exception as e:
            print(e)

def configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent):
    # Patch the target configuration: scan speed, UA, and the Xray proxy
    configuration_url = ''.join((url, '/api/v1/targets/{0}/configuration'.format(target_id)))
    # scan_speed: sequential (single-threaded), slow, moderate, fast
    data = {"scan_speed": "sequential", "login": {"kind": "none"}, "ssh_credentials": {"kind": "none"}, "sensor": False, "user_agent": user_agent, "case_sensitive": "auto", "limit_crawler_scope": True, "excluded_paths": [], "authentication": {"enabled": False}, "proxy": {"enabled": Crawl, "protocol": "http", "address": proxy_address, "port": proxy_port}, "technologies": [], "custom_headers": [], "custom_cookies": [], "debug": False, "client_certificate_password": "", "issue_tracker_id": "", "excluded_hours_id": ""}
    r = requests.patch(url=configuration_url, data=json.dumps(data), headers=headers, timeout=30, verify=False)

def main():
    Crawl = True
    proxy_address = '127.0.0.1'
    proxy_port = '7777'
    awvs_url = 'https://127.0.0.1:3443'  # AWVS URL
    with open(r'C:\Users\Administrator\Desktop\ScanURL.txt', 'r', encoding='utf-8') as f:
        targets = f.readlines()
    profile_id = "11111111-1111-1111-1111-111111111111"  # Full Scan profile
    # UA for scanning as the Baidu spider
    # user_agent = "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    # Default scanning UA
    user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.21 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.21"
    if Crawl:
        profile_id = "11111111-1111-1111-1111-111111111117"  # Crawl Only profile; Xray does the vulnerability checks

    for target in targets:
        target = target.strip()
        if scan(awvs_url, target, Crawl, user_agent, profile_id, proxy_address, int(proxy_port)):
            print("{0} added successfully".format(target))

if __name__ == '__main__':
    main()
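The script above only adds targets and starts scans. If you also want to watch progress from the command line, the AWVS REST API exposes the scan list as well. A rough sketch reusing the apikey/headers defined above; the exact field names in the response can vary between AWVS versions, so treat them as assumptions:

def list_scans(awvs_url):
    # Ask AWVS for its current scans and print each target's address and session status
    r = requests.get(awvs_url + '/api/v1/scans', headers=headers, timeout=30, verify=False)
    for s in json.loads(r.content).get('scans', []):
        address = s.get('target', {}).get('address')
        status = s.get('current_session', {}).get('status')
        print(address, status)

# Example: list_scans('https://127.0.0.1:3443')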

Installing the Script's Python Dependency: requests

python3 -m pip install requests

Integration Successful

Feed the script a list of site domains and it runs the scans automatically.
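The script expects C:\Users\Administrator\Desktop\ScanURL.txt to contain one target per line (that is what f.readlines() plus strip() implies), for example:

http://testphp.vulnweb.com
https://example.com
https://example.org

Each line is added as an AWVS target, the crawl traffic is proxied through Xray on 127.0.0.1:7777, and Xray writes its findings to test.html.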

References

https://zhuanlan.zhihu.com/p/368964281
https://blog.csdn.net/I_like_ctrl/article/details/126435285