This article walks you through parsing custom fonts in all their forms: rendering the glyph bitmaps, extracting a mapping table via image recognition, proofreading it, and finally building the correct correspondence to recover the real data. After reading it, you should be able to counter any flavor of font-based anti-scraping, move for move.

## A deep dive into custom fonts

### What a custom font is

First we must be clear about how a custom font differs from an ordinary one. A custom font maps special (private-use) Unicode code points to its own glyph bitmaps, while an ordinary font only defines how standard code points are displayed. Text rendered with an ordinary font can therefore be copied out as correct text, whereas copying text rendered with a custom font only yields the raw Unicode code points. How does the browser display the right characters, then? It looks up each code point in the custom font and renders the corresponding glyph bitmap.

Below we use a group-buying site as the demonstration. The page analyzed this time is the Shenzhen leisure & entertainment listing. As the screenshot shows, the custom-font characters all live inside `svgmtsi` tags, and different `class` attributes select different custom font files. If we block loading of all custom fonts, the corresponding spots on the page turn into mojibake. The screenshot also shows that the positions using custom fonts are completely random. For this situation it is best to use a library that can modify the HTML DOM tree while preserving the relative order of nodes. I chose BeautifulSoup, which unfortunately only supports CSS selectors. That is fine, though: back when I first learned programming and wrote small crawlers in Java, I preferred CSS selectors anyway, so this is a welcome throwback.

Let's analyze the page step by step. First, fetch it with Python:

```python
import requests

headers = {
    "Connection": "keep-alive",
    "Cache-Control": "max-age=0",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "Accept-Language": "zh-CN,zh;q=0.9",
}
session = requests.Session()
session.headers = headers
res = session.get("http://www.dianping.com/shenzhen/ch30")
```

Next, parse the downloaded page with BeautifulSoup and build the DOM tree:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(res.text, 'html5lib')
```

For BeautifulSoup, see the official documentation:

- https://beautifulsoup.readthedocs.io/zh_CN/v4.4.0/
- https://www.crummy.com/software/BeautifulSoup/bs4/doc.zh/

(The two links have identical content; only the table-of-contents layout differs.)

### Parsing the top navigation bar: categories and districts

Since this site now demands a login as soon as you turn to page two, and we have no intention of really crawling it, I simulate batch downloading by fetching a handful of category links. We want to parse out the corresponding headings. After obtaining the XPath with an XPath query tool, it converts readily to a CSS selector.

Category list:

```python
# //div[@id='classfy']/a/span
type_list = []
for a_tag in soup.select("div#classfy > a"):
    type_list.append((a_tag.span.text, a_tag['href']))
type_list
```

```
[('按摩/足疗', 'http://www.dianping.com/shenzhen/ch30/g141'),
 ('KTV', 'http://www.dianping.com/shenzhen/ch30/g135'),
 ('洗浴/汗蒸', 'http://www.dianping.com/shenzhen/ch30/g140'),
 ('酒吧', 'http://www.dianping.com/shenzhen/ch30/g133'),
 ('运动健身', 'http://www.dianping.com/shenzhen/ch30/g2636'),
 ('茶馆', 'http://www.dianping.com/shenzhen/ch30/g134'),
 ('密室', 'http://www.dianping.com/shenzhen/ch30/g2754'),
 ('团建拓展', 'http://www.dianping.com/shenzhen/ch30/g34089'),
 ('采摘/农家乐', 'http://www.dianping.com/shenzhen/ch30/g20038'),
 ('剧本杀', 'http://www.dianping.com/shenzhen/ch30/g50035'),
 ('游戏厅', 'http://www.dianping.com/shenzhen/ch30/g137'),
 ('DIY手工坊', 'http://www.dianping.com/shenzhen/ch30/g144'),
 ('私人影院', 'http://www.dianping.com/shenzhen/ch30/g20041'),
 ('轰趴馆', 'http://www.dianping.com/shenzhen/ch30/g20040'),
 ('网吧/电竞', 'http://www.dianping.com/shenzhen/ch30/g20042'),
 ('VR', 'http://www.dianping.com/shenzhen/ch30/g33857'),
 ('桌面游戏', 'http://www.dianping.com/shenzhen/ch30/g6694'),
 ('棋牌室', 'http://www.dianping.com/shenzhen/ch30/g32732'),
 ('文化艺术', 'http://www.dianping.com/shenzhen/ch30/g142'),
 ('新奇体验', 'http://www.dianping.com/shenzhen/ch30/g34090')]
```
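The same anchor-and-span extraction can be sanity-checked without BeautifulSoup, using only the standard library's `html.parser`. This is a minimal sketch of the `div#classfy > a > span` idea, run on a made-up HTML fragment (not the real page):

```python
from html.parser import HTMLParser

class NavParser(HTMLParser):
    """Collect (span text, href) pairs from anchors inside <div id="classfy">."""
    def __init__(self):
        super().__init__()
        self.in_div = False   # inside div#classfy
        self.in_a = False     # inside an <a> under that div
        self.in_span = False  # inside a <span> under that <a>
        self.href = None
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("id") == "classfy":
            self.in_div = True
        elif tag == "a" and self.in_div:
            self.in_a = True
            self.href = attrs.get("href")
        elif tag == "span" and self.in_a:
            self.in_span = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_div = False
        elif tag == "a":
            self.in_a = False
        elif tag == "span":
            self.in_span = False

    def handle_data(self, data):
        if self.in_span:
            self.links.append((data, self.href))

html = ('<div id="classfy">'
        '<a href="/g141"><span>按摩/足疗</span></a>'
        '<a href="/g135"><span>KTV</span></a>'
        '</div>')
parser = NavParser()
parser.feed(html)
print(parser.links)  # [('按摩/足疗', '/g141'), ('KTV', '/g135')]
```

BeautifulSoup does the same traversal for us, of course; the sketch just makes explicit what the CSS selector is matching.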
District list:

```python
# //div[@id='region-nav']/a/span
area_list = []
for a_tag in soup.select("div#region-nav > a"):
    area_list.append((a_tag.span.text, a_tag['href']))
area_list
```

```
[('福田区', 'http://www.dianping.com/shenzhen/ch30/r29'),
 ('南山区', 'http://www.dianping.com/shenzhen/ch30/r31'),
 ('罗湖区', 'http://www.dianping.com/shenzhen/ch30/r30'),
 ('盐田区', 'http://www.dianping.com/shenzhen/ch30/r32'),
 ('龙华区', 'http://www.dianping.com/shenzhen/ch30/r12033'),
 ('龙岗区', 'http://www.dianping.com/shenzhen/ch30/r34'),
 ('宝安区', 'http://www.dianping.com/shenzhen/ch30/r33'),
 ('坪山区', 'http://www.dianping.com/shenzhen/ch30/r12035'),
 ('光明区', 'http://www.dianping.com/shenzhen/ch30/r89951'),
 ('南澳大鹏新区', 'http://www.dianping.com/shenzhen/ch30/r12036')]
```

### Finding the CSS that defines the custom fonts

Observation shows that the CSS file defining the custom fonts lives at a URL containing the keyword `svgtextcss`. We can pick it out from all the stylesheet links:

```python
from urllib import parse

base_url = "http://www.dianping.com/shenzhen/ch30"

def getUrlFromNode(nodes, tag):
    for node in nodes:
        url = node['href']
        if url.find(tag) != -1:
            return parse.urljoin(base_url, url)

def get_css_url(soup):
    css_url = getUrlFromNode(soup.select("head > link[rel=stylesheet]"), "svgtextcss")
    return css_url

css_url = get_css_url(soup)
```
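`parse.urljoin` in `getUrlFromNode` is what turns the stylesheet's `href` into a fetchable URL regardless of its form. A quick sanity check of its behavior (the `href` values here are made up; in particular, the assumption that the site's stylesheet link is protocol-relative is mine):

```python
from urllib.parse import urljoin

base_url = "http://www.dianping.com/shenzhen/ch30"

# Absolute hrefs pass through unchanged:
absolute = urljoin(base_url, "http://example.com/a.css")
# Protocol-relative hrefs (//host/path) inherit the scheme of the base URL:
protocol_relative = urljoin(base_url, "//s3plus.meituan.net/v1/x.css")
# Path-relative hrefs resolve against the base URL's path:
path_relative = urljoin(base_url, "style.css")

print(absolute)           # http://example.com/a.css
print(protocol_relative)  # http://s3plus.meituan.net/v1/x.css
print(path_relative)      # http://www.dianping.com/shenzhen/style.css
```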
```python
css_url
```

```
'http://s3plus.meituan.net/v1/mss_0a06a471f9514fc79c981b5466f56b91/svgtextcss/18379bbeb1f5bf54c52bb1d8b71d4fb1.css'
```

### Parsing the CSS to get the font URLs

Pretty-printing the font-defining CSS file shows that each class declares the font-family name it uses, and each `@font-face` block maps a family name to its font files. Right now the visible pattern is simply that each class name gets a `PingFangSC-Regular-` prefix to form the family name, but there is no guarantee the site will keep designing it that way. To avoid ever touching this code again, we still parse out each class's `font-family`, then each family's list of font URLs, and finally keep the URL with the `.woff` suffix, building a mapping from class attribute to woff font. The complete code:

```python
import re

def get_url(urls, tag, only_First=True):
    urls = [parse.urljoin(base_url, url)
            for url in urls if tag is None or url.find(tag) != -1]
    if urls and only_First:
        return urls[0]
    return urls

def parseCssFontUrl(css_url, tag=None, only_First=True):
    res = session.get(css_url)
    rule = {}
    font_face = {}
    for name, value in re.findall(r"([^{}]+)\{([^{}]+)\}", res.text):
        name = name.strip()
        for row in value.split(";"):
            if row.find(":") == -1:
                continue
            k, v = row.split(":")
            k, v = k.strip(), v.strip(' "\'')
            if name == "@font-face":
                if k == "font-family":
                    font_name = v
                elif k == "src":
                    font_face.setdefault(font_name, []).extend(
                        re.findall(r'url\("([^()]+)"\)', v))
            else:
                rule[name[1:]] = v
    font_urls = {}
    for class_name, tag_name in rule.items():
        font_urls[class_name] = get_url(font_face[tag_name], tag)
    return font_urls

font_urls = parseCssFontUrl(css_url, ".woff")
font_urls
```

```
{'shopNum': 'http://s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/89e46c52.woff',
 'tagName': 'http://s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/f8536a55.woff',
 'reviewTag': 'http://s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/0373a060.woff',
 'address': 'http://s3plus.meituan.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/f8536a55.woff'}
```

### Downloading the fonts

We can download all four fonts and take a look:

```python
def download_file(url, out_name=None):
    if out_name is None:
        out_name = url[url.rfind("/") + 1:]
    with open(out_name, "wb") as f:
        f.write(session.get(url).content)

for class_name, url in font_urls.items():
    download_file(url, f"{class_name}.woff")
```
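To see what the two regexes inside `parseCssFontUrl` actually capture, here is a run on a made-up CSS fragment shaped like the site's `svgtextcss` file (the fragment is an assumption, not the real file):

```python
import re

# A class rule plus an @font-face block, in the same shape the parser expects:
css = (
    ".shopNum{font-family: 'PingFangSC-Regular-shopNum';}"
    '@font-face{font-family: "PingFangSC-Regular-shopNum";'
    'src: url("//s3plus.meituan.net/font/89e46c52.woff");}'
)

# First regex: split the stylesheet into (selector, body) pairs.
blocks = re.findall(r"([^{}]+)\{([^{}]+)\}", css)
print([name for name, _ in blocks])  # ['.shopNum', '@font-face']

# Second regex: pull the font URL out of a src declaration's value.
src = 'url("//s3plus.meituan.net/font/89e46c52.woff")'
urls = re.findall(r'url\("([^()]+)"\)', src)
print(urls)  # ['//s3plus.meituan.net/font/89e46c52.woff']
```

Note the simple `row.split(":")` in the parser works here only because the extracted URLs are protocol-relative (no `http:` colon inside the value).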
This yields four font files. To inspect a font locally we can use the FontCreator font-design tool; a quick web search turns up download links. Opening the four files and comparing shows that their glyph bitmaps are in exactly the same order; only the mapping from code point to glyph differs.

### Building the custom-font mapping

Next we need to work out, for a given font, the real character behind every defined Unicode code point. What the font file stores for each character is a glyph bitmap, essentially an image rather than text, so we cannot copy it out. But we can load the custom font with PIL, draw each defined code point's glyph, and then run image recognition on the result to get the corresponding text.

This requires the fontTools package, which can be installed directly with pip. See: https://github.com/fonttools/fonttools

Taking the font for `class="tagName"` as the example, first get its list of defined Unicode code points:

```python
from fontTools.ttLib import TTFont

tfont = TTFont("tagName.woff")
# Drop the first 2 bookkeeping glyphs
uni_list = tfont.getGlyphOrder()[2:]
print(uni_list[:10], len(uni_list))
```

```
['uniec3e', 'unif3fc', 'uniea1f', 'unie7f7', 'unie258', 'unif5aa', 'unif48c', 'unif088', 'unif588', 'unif82e'] 601
```

This prints the first 10 code points; there are 601 custom characters in total, matching what the FontCreator screenshot above showed.

Using PIL's drawing tools, first render the first 5 code points as a test:

```python
from PIL import ImageFont, Image, ImageDraw

font = ImageFont.truetype("tagName.woff", 20)
for uchar in uni_list[:5]:
    unknown_char = f"\\u{uchar[3:]}".encode().decode("unicode_escape")
    im = Image.new(mode='RGB', size=(22, 20), color="white")
    draw = ImageDraw.Draw(im=im)
    draw.text(xy=(5, -5), text=unknown_char, fill=0, font=font)
    display(im)  # display() renders inline in Jupyter/IPython
```

The glyphs render correctly. Now test drawing the code points in groups of n, to cut down the number of image-recognition calls later (here n=25, drawing 5 groups):

```python
n = 25
font = ImageFont.truetype("tagName.woff", 20)
for i in range(0, 5 * n, n):
    im = Image.new(mode='RGB', size=(20 * n + 10, 22), color="white")
    draw = ImageDraw.Draw(im=im)
    unknown_chars = "".join(uni_list[i:i + n]).replace("uni", "\\u")
    unknown_chars = unknown_chars.encode().decode("unicode_escape")
    draw.text(xy=(5, -4), text=unknown_chars, fill=0, font=font)
    display(im)
```

Wrapping this up to batch-generate all the image objects for one font file:

```python
from fontTools.ttLib import TTFont
from PIL import ImageFont, Image, ImageDraw

def getCustomFontGroupImgs(font_file, uni_list=None, group_num=25):
    if uni_list is None:
        tfont = TTFont(font_file)
        uni_list = tfont.getGlyphOrder()[2:]
    imgs = []
    font = ImageFont.truetype(font_file, 20)
    for i in range(0, len(uni_list), group_num):
        im = Image.new(mode='RGB', size=(20 * group_num + 10, 22), color="white")
        draw = ImageDraw.Draw(im=im)
        unknown_chars = "".join(uni_list[i:i + group_num]).replace("uni", "\\u")
        unknown_chars = unknown_chars.encode().decode("unicode_escape")
        draw.text(xy=(5, -4), text=unknown_chars, fill=0, font=font)
        imgs.append(im)
    return imgs
```
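The glyph-name-to-character conversion above goes through `str.replace` plus the `unicode_escape` codec. As a sanity check, `chr(int(name[3:], 16))` gives the same characters, since a name like `uniec3e` is just the hex code point with a `uni` prefix (the sample names are taken from the `tagName` output above):

```python
uni_list = ["uniec3e", "unif3fc", "uniea1f"]  # sample glyph names from the font

# Route used in the article: textual "\uXXXX" escapes decoded in one shot.
chars_a = "".join(uni_list).replace("uni", "\\u").encode().decode("unicode_escape")

# Equivalent route: strip the "uni" prefix and parse the hex code point directly.
chars_b = "".join(chr(int(name[3:], 16)) for name in uni_list)

print(chars_a == chars_b)               # True
print([hex(ord(c)) for c in chars_b])   # ['0xec3e', '0xf3fc', '0xea1f']
```

The `chr`/`int` route avoids any surprise if a glyph name ever contained the substring `uni` elsewhere, but both work for names of this shape.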
pytesseract does not support Chinese recognition out of the box and needs quite a bit of configuration. So this time we use a recently popular OCR library instead, ddddocr ("带带弟弟OCR"), installable with a single command:

```
pip install ddddocr
```

For usage examples and parameters see: https://pypi.org/project/ddddocr/

However, the library only accepts image bytes or base64 strings, not PIL image objects directly, so a conversion helper is needed:

```python
from io import BytesIO

def get_img_bytes(img):
    img_byte = BytesIO()
    img.save(img_byte, format='JPEG')  # format: PNG or JPEG
    return img_byte.getvalue()  # PIL image object -> raw bytes
```

Then batch recognition works like this:

```python
from ddddocr import DdddOcr

imgs = getCustomFontGroupImgs('shopNum.woff', group_num=50)
ocr = DdddOcr()
result = []
for im in imgs:
    display(im)
    text = ocr.classification(get_img_bytes(im))
    print(text)
    result.append(text)
```

The accuracy is already quite high overall. Still, I decided to subclass DdddOcr and override the recognition method to tweak the algorithm (writing my own image-recognition class can wait):

```python
from ddddocr import DdddOcr, np

class OCR(DdddOcr):
    def __init__(self):
        super().__init__()

    def ocr(self, image):
        image = image.resize(
            (int(image.size[0] * (64 / image.size[1])), 64),
            Image.ANTIALIAS).convert('L')  # Image.LANCZOS in newer Pillow
        image = np.array(image).astype(np.float32)
        image = np.expand_dims(image, axis=0) / 255.
        image = (image - 0.5) / 0.5
        ort_inputs = {'input1': np.array([image])}
        ort_outs = self._DdddOcr__ort_session.run(None, ort_inputs)
        result = []
        last_item = 0
        for item in ort_outs[0][0]:
            if item == 0 or item == last_item:
                continue
            result.append(self._DdddOcr__charset[item])
            last_item = item
        return ''.join(result)
```

And call it like this:

```python
imgs = getCustomFontGroupImgs('shopNum.woff', group_num=42)
ocr = OCR()
result = []
for im in imgs:
    display(im)
    text = ocr.ocr(im)
    print(text)
    result.append(text)
```

With the subclassed version the recognition accuracy improves a bit further. After manual proofreading and correction, we end up with the following character set:
```python
words = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下澩凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'
```

The Unicode code points in the font file correspond one-to-one with this character set. Since every custom font on this site uses glyphs in this same order, we do not need to parse any other font file to rebuild the list. Of course, if the site someday gets vicious enough to randomize the glyph order per font file as well, all I can say is: that's cold. I'll upgrade my code when that happens, because my personal goal is that no data is beyond parsing.

With the glyph-order character set in hand, building a font file's mapping is easy:

```python
from fontTools.ttLib import TTFont

font_data = TTFont("tagName.woff")
uni_list = font_data.getGlyphOrder()[2:]
font_map = dict(zip(map(lambda x: x[3:], uni_list), words))
```

### A font cache

For this site we cannot assume every page uses this one CSS file, so we build a two-level cache: CSS URL to font-file URLs, and font-file URL to character mapping:

```python
from io import BytesIO

url2FontMapCache = {}
css2FontCache = {}

def getFontMapFromURL(font_url):
    "Cache the character map for a font URL"
    if font_url not in url2FontMapCache:
        font_bytes = BytesIO(session.get(font_url).content)
        font_data = TTFont(font_bytes)
        uni_list = font_data.getGlyphOrder()[2:]
        url2FontMapCache[font_url] = dict(
            zip(map(lambda x: x[3:], uni_list), words))
    return url2FontMapCache[font_url]

def getFontMapFromClassName(class_name, css_url):
    "Cache the font URLs for a given CSS file"
    if css_url not in css2FontCache:
        css2FontCache[css_url] = parseCssFontUrl(css_url, ".woff")
    font_url = css2FontCache[css_url].get(class_name)
    return getFontMapFromURL(font_url)
```

Now we can fetch the mapping of every custom font on the current page:

```python
for class_name in font_urls.keys():
    font_map = getFontMapFromClassName(class_name, css_url)
    print(list(font_map.items())[:12])
```

Result:

```
[('e0a7', '1'), ('ebf3', '2'), ('ee9b', '3'), ('e7e4', '4'), ('f5f8', '5'), ('e7a1', '6'), ('ef49', '7'), ('eef7', '8'), ('f7e0', '9'), ('e633', '0'), ('e5de', '店'), ('e67f', '中')]
[('ec3e', '1'), ('f3fc', '2'), ('ea1f', '3'), ('e7f7', '4'), ('e258', '5'), ('f5aa', '6'), ('f48c', '7'), ('f088', '8'), ('f588', '9'), ('f82e', '0'), ('e7c5', '店'), ('e137', '中')]
[('e3e0', '1'), ('e85f', '2'), ('f3c8', '3'), ('f3d5', '4'), ('e771', '5'), ('f251', '6'), ('f6f6', '7'), ('e8da', '8'), ('ea58', '9'), ('f8fb', '0'), ('ef9b', '店'), ('f3dd', '中')]
[('ec3e', '1'), ('f3fc', '2'), ('ea1f', '3'), ('e7f7', '4'), ('e258', '5'), ('f5aa', '6'), ('f48c', '7'), ('f088', '8'), ('f588', '9'), ('f82e', '0'), ('e7c5', '店'), ('e137', '中')]
```
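With a font map in hand, decoding one obfuscated character is a three-step pipeline: escape the private-use character to its `\uXXXX` text form, strip the `\u` prefix to get the hex key, then look the key up. A self-contained sketch, using two entries taken from the `tagName` map printed above:

```python
font_map = {"e7c5": "店", "e137": "中"}  # two entries from the tagName map

def decode_char(c, font_map):
    # "\ue7c5" -> b"\\ue7c5" -> "\\ue7c5" -> "e7c5"
    key = c.encode("unicode_escape").decode()[2:]
    return font_map.get(key, c)

obfuscated = "\ue7c5\ue137"  # what you'd copy out of a svgmtsi tag
decoded = "".join(decode_char(c, font_map) for c in obfuscated)
print(decoded)  # 店中
```

This is exactly the lookup the replacement loop in the next section performs for every `svgmtsi` character.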
### Replacing every custom-font character with real text

With the font mappings we can replace the page's custom-font characters with the decoded text. First grab the list of parent nodes of the characters to be replaced, for later comparison:

```python
b_tags = [svgmtsi.parent for svgmtsi in soup.find_all('svgmtsi')]
b_tags
```

Although each `svgmtsi` tag on this site currently holds only a single character, we cannot be sure it always will, so the code handles multiple characters per tag. Perform the replacement:

```python
for svgmtsi in soup.find_all('svgmtsi'):
    class_name = svgmtsi['class'][0]
    font_map = getFontMapFromClassName(class_name, css_url)
    chars = []
    for c in svgmtsi.text:
        char = c.encode("unicode_escape").decode()[2:]
        chars.append(font_map[char])
    svgmtsi.replaceWith("".join(chars))
```

After the replacement, inspect the previously saved nodes again with `b_tags`.

### Extracting the data

With the custom fonts replaced, extracting the data we need is smooth sailing:

```python
num_rule = re.compile(r"\d+")
for li_tag in soup.select("div#shop-all-list div.txt"):
    title = li_tag.select_one("div.tit>a>h4").text
    url = li_tag.select_one("div.tit>a")["href"]
    star_class = li_tag.select_one(
        "div.comment>div.nebula_star>div.star_icon>span")["class"]
    star = int(num_rule.findall(" ".join(star_class))[0]) // 10
    comment_tag = li_tag.select_one("div.comment>a.review-num>b")
    comment_num = comment_tag.text if comment_tag else None
    mean_price_tag = li_tag.select_one("div.comment>a.mean-price>b")
    mean_price = mean_price_tag.text if mean_price_tag else None
    fun_type = li_tag.select_one("div.tag-addr>a:nth-of-type(1)>span.tag").text
    area = li_tag.select_one("div.tag-addr>a:nth-of-type(2)>span.tag").text
    print(title, url, star, comment_num, mean_price, fun_type, area)
```

```
轰趴天台·大白之家(南山店) http://www.dianping.com/shop/k1lqueFI6sIOfjnI 5 129 ¥196 轰趴馆 华侨城
巨鹿搏击俱乐部(车公庙店) http://www.dianping.com/shop/k4tmabQaordrq6Tm 5 238 ¥246 拳击 车公庙
SWING CAGE 棒球击球笼&冲浪滑板碗池 http://www.dianping.com/shop/l2lUP0rvcLPy4ebm 5 1088 ¥85 体育场馆 科技园
微醺云深处沉浸式剧场 http://www.dianping.com/shop/HaLzYuXfvUWmPLSz 0 8 None 剧本杀 科技园
逐见有光Chandelle http://www.dianping.com/shop/H2CSpvtn70y12wNh 5 138 ¥281 DIY手工坊 市中心/会展中心
cozy cozy银饰DIY手作室(万象城店) http://www.dianping.com/shop/l3mCIs6dSrdL9jo7 5 1270 ¥321 DIY手工坊 万象城
博哥的小剧场沉浸推理体验馆(南山万象天地店) http://www.dianping.com/shop/H3S4zD55e1gmfb8E 3 18 None 剧本杀 科技园
FlowLife拓极滑板冲浪俱乐部(蛇口旗舰店) http://www.dianping.com/shop/G9sHgWISxYtBXz79 5 481 ¥217 新奇体验 蛇口
Doors秘道·独立剧情密室(车公庙分店) http://www.dianping.com/shop/k4O3oDj6BwLtbgD4 5 878 ¥101 密室 车公庙
御隆茶馆 http://www.dianping.com/shop/H6HEuBttJKlMkaAn 0 3 None 棋牌室 南头
【十万伏特】手创空间 自由DIY http://www.dianping.com/shop/k5OURy1bNIs7ed7v 5 271 ¥152 DIY手工坊 梅林
八町桑BATTING SOUND 棒球体验馆 http://www.dianping.com/shop/k9yQRAmYoa3o8cLI 5 734 ¥114 新奇体验 车公庙
星美棋牌 http://www.dianping.com/shop/l4JRIjqLWi2zeFQd 3 9 None 棋牌室 国贸
ZUO STUDIO烘焙课程· 茶歇蛋糕订购(南山京基百纳广场… http://www.dianping.com/shop/ER0EyDpjx36ekF0G 5 687 ¥224 DIY手工坊 白石洲
cozy cozy银饰DIY手作室(南山店) http://www.dianping.com/shop/G7MbwkosLSvS3X1I 5 431 ¥338 DIY手工坊 南头
```

### Batch downloading

With everything tested, we can wrap up all the relevant methods and download all the entertainment group-buying listings for Shenzhen's Huanancheng area:

```python
import re
from bs4 import BeautifulSoup
import requests
import pandas as pd
import random
import time
from urllib import parse
from io import BytesIO
from fontTools.ttLib import TTFont

url2FontMapCache = {}
css2FontCache = {}
words = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下澩凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'
num_rule = re.compile(r"\d+")
```
```python
def get_url(urls, tag, only_First=True):
    urls = [parse.urljoin(base_url, url)
            for url in urls if tag is None or url.find(tag) != -1]
    if urls and only_First:
        return urls[0]
    return urls

def parseCssFontUrl(css_url, tag=None, only_First=True):
    res = session.get(css_url)
    rule = {}
    font_face = {}
    for name, value in re.findall(r"([^{}]+)\{([^{}]+)\}", res.text):
        name = name.strip()
        for row in value.split(";"):
            if row.find(":") == -1:
                continue
            k, v = row.split(":")
            k, v = k.strip(), v.strip(' "\'')
            if name == "@font-face":
                if k == "font-family":
                    font_name = v
                elif k == "src":
                    font_face.setdefault(font_name, []).extend(
                        re.findall(r'url\("([^()]+)"\)', v))
            else:
                rule[name[1:]] = v
    font_urls = {}
    for class_name, tag_name in rule.items():
        font_urls[class_name] = get_url(font_face[tag_name], tag)
    return font_urls

def getFontMapFromURL(font_url):
    "Cache the character map for a font URL"
    if font_url not in url2FontMapCache:
        font_bytes = BytesIO(session.get(font_url).content)
        font_data = TTFont(font_bytes)
        uni_list = font_data.getGlyphOrder()[2:]
        url2FontMapCache[font_url] = dict(
            zip(map(lambda x: x[3:], uni_list), words))
    return url2FontMapCache[font_url]

def getFontMapFromClassName(class_name, css_url):
    "Cache the font URLs for a given CSS file"
    if css_url not in css2FontCache:
        css2FontCache[css_url] = parseCssFontUrl(css_url, ".woff")
    font_url = css2FontCache[css_url].get(class_name)
    return getFontMapFromURL(font_url)

def parse_data(soup):
    result = []
    for li_tag in soup.select("div#shop-all-list div.txt"):
        title = li_tag.select_one("div.tit>a>h4").text
        url = li_tag.select_one("div.tit>a")["href"]
        star_class = li_tag.select_one(
            "div.comment>div.nebula_star>div.star_icon>span")["class"]
        star = int(num_rule.findall(" ".join(star_class))[0]) // 10
        comment_tag = li_tag.select_one("div.comment>a.review-num>b")
        comment_num = comment_tag.text if comment_tag else None
        mean_price_tag = li_tag.select_one("div.comment>a.mean-price>b")
        mean_price = mean_price_tag.text if mean_price_tag else None
        fun_type = li_tag.select_one("div.tag-addr>a:nth-of-type(1)>span.tag").text
        area = li_tag.select_one("div.tag-addr>a:nth-of-type(2)>span.tag").text
        result.append((title, star, comment_num, mean_price, fun_type, area, url))
    return result

def getUrlFromNode(nodes, tag):
    for node in nodes:
        url = node['href']
        if url.find(tag) != -1:
            return parse.urljoin(base_url, url)

def get_css_url(soup):
    css_url = getUrlFromNode(soup.select("head > link[rel=stylesheet]"), "svgtextcss")
    return css_url

def fix_text(soup):
    css_url = get_css_url(soup)
    for svgmtsi in soup.find_all('svgmtsi'):
        class_name = svgmtsi['class'][0]
        font_map = getFontMapFromClassName(class_name, css_url)
        chars = []
        for c in svgmtsi.text:
            char = c.encode("unicode_escape").decode()[2:]
            chars.append(font_map[char])
        svgmtsi.replaceWith("".join(chars))

headers = {
    "Connection": "keep-alive",
    "Cache-Control": "max-age=0",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "Accept-Language": "zh-CN,zh;q=0.9",
}
session = requests.Session()
session.headers = headers
base_url = "http://www.dianping.com/shenzhen/ch30"
res = session.get(base_url)
soup = BeautifulSoup(res.text, 'html5lib')
type_list = []
for a_tag in soup.select("div#classfy > a"):
    # append the Huanancheng region code to each category URL
    type_list.append((a_tag.span.text, a_tag['href'] + 'r91172'))

result = []
for type_name, url in type_list:
    print(type_name, url)
    res = session.get(url)
    soup = BeautifulSoup(res.text, 'html5lib')
    fix_text(soup)
    result.extend(parse_data(soup))
    time.sleep(random.randint(2, 4))

df = pd.DataFrame(result, columns=["标题", "星级", "评论数", "均价", "娱乐类型", "区域", "链接"])
df.评论数 = df.评论数.apply(lambda x: int(x) if x else pd.NA)
df.均价 = df.均价.str[1:].apply(lambda x: int(x) if x else pd.NA)
df.drop_duplicates(inplace=True)
df.to_excel("华南城娱乐.xlsx", index=False)
```
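The two `apply` lambdas in the cleanup mirror a simple per-row normalization: comment counts and mean prices may be missing, and prices carry a leading `¥`. The same logic in plain Python, as an illustrative helper (not part of the article's script):

```python
def clean_row(comment_num, mean_price):
    """Normalize one scraped row: counts to int, "¥196" to 196, missing to None."""
    comments = int(comment_num) if comment_num else None
    price = int(mean_price[1:]) if mean_price else None  # drop the leading "¥"
    return comments, price

print(clean_row("129", "¥196"))  # (129, 196)
print(clean_row(None, None))     # (None, None)
```

The pandas version does the same thing column-wise, with `pd.NA` standing in for `None` so the columns keep a nullable dtype.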
The crawl results (with some manual post-editing):

## Summary

All in all, this group-buying site's anti-scraping machinery is pretty fierce: even after enormous effort we can only pull one page per category, and without addresses, so I'd advise against crawling it. But the goal of this article was to demonstrate defeating the hardest part, font-based anti-scraping, and I hope it has achieved that. If an even harder font-obfuscating site turns up later, we'll dissect it in more depth then, move for move. After working through this article, you should be able to handle any font-based anti-scraping problem.

Copyright notice: this is an original article by CSDN blogger 「小小明-代码实体」, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting. Original: https://blog.csdn.net/as604049322/article/details/119333427

Now go put it into practice!