Using Open Interpreter

Notes on using https://github.com/KillianLucas/open-interpreter in a mainland-China network environment.

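Installation itself is the standard pip install from the project README:

pip install open-interpreter
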
I found that changing api.openai.com to api.openai-sb.com in the openai library alone is not enough; you also have to edit the main file in the litellm library and change api.openai.com to api.openai-sb.com there as well.

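Patching installed packages by hand works, but the edits are lost on every upgrade. A less invasive sketch in Python, assuming an openai 0.x SDK (which reads the OPENAI_API_BASE environment variable and exposes openai.api_base) and a litellm version that honors a module-level litellm.api_base; verify both against the versions you actually have installed:

import os

# Point both SDKs at the mirror before anything caches the default base URL.
os.environ["OPENAI_API_BASE"] = "https://api.openai-sb.com/v1"

import openai
import litellm

openai.api_base = "https://api.openai-sb.com/v1"   # openai 0.x global base URL
litellm.api_base = "https://api.openai-sb.com/v1"  # assumed litellm override; check your version
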
Next, while running it you will hit packages that are not installed, and installing them needs a proxy or a mirror source. The easy fix is to append one sentence to your prompt: if you need to install a package with pip install, add --proxy http://127.0.0.1:7890 to set a proxy.

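If a local proxy is not an option, a domestic mirror serves the same purpose; the commands below assume the Tsinghua TUNA index, and the second one persists the proxy in pip's config so it no longer has to be mentioned in the prompt:

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple requests
pip config set global.proxy http://127.0.0.1:7890
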
That said, the process seems to hang at times. Here is the full session transcript:

Microsoft Windows [Version 10.0.19043.1826]
(c) Microsoft Corporation. All rights reserved.

D:\python_work\work\auto_interpreter>conda activate py310

(py310) D:\python_work\work\auto_interpreter>interpreter -y

Welcome to Open Interpreter.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
▌ OpenAI API key not found

To use GPT-4 (recommended) please provide an OpenAI API key.

To use Code-Llama (free but less capable) press enter.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
OpenAI API key: xxxx

Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> Please summarize this article: https://www.bilibili.com/read/cv17233088/ and output the result in Chinese. If you need to install a package via pip, add --proxy http://127.0.0.1:7890 after pip install to set a proxy.

First, we need to make a plan to complete this task.

1 First, install some necessary Python packages, such as requests and beautifulsoup4, which will help us fetch and parse the page. Since the user asked for a proxy, we will add the proxy setting when installing them.
2 Then use the requests package to fetch the article's HTML content.
3 Next, parse the HTML with beautifulsoup4 and extract the article's main content.
4 Finally, summarize the article with Python natural-language-processing libraries such as jieba and gensim, installing them (with the proxy setting) if they are not yet present.

Now let's start with step one and install the necessary Python packages.



pip install --proxy http://127.0.0.1:7890 requests beautifulsoup4


Microsoft Windows [Version 10.0.19043.1826]
(c) Microsoft Corporation. All rights reserved.
(py310) D:\python_work\work\auto_interpreter>echo "ACTIVE_LINE:1"
"ACTIVE_LINE:1"
(py310) D:\python_work\work\auto_interpreter>pip install --proxy http://127.0.0.1:7890 requests beautifulsoup4
Requirement already satisfied: requests in d:\downloads\anaconda\envs\py310\lib\site-packages (2.31.0)
Collecting beautifulsoup4
Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\downloads\anaconda\envs\py310\lib\site-packages (from
requests) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in d:\downloads\anaconda\envs\py310\lib\site-packages (from requests)
(3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\downloads\anaconda\envs\py310\lib\site-packages (from
requests) (2.0.4)
Requirement already satisfied: certifi>=2017.4.17 in d:\downloads\anaconda\envs\py310\lib\site-packages (from
requests) (2023.7.22)
Collecting soupsieve>1.2 (from beautifulsoup4)
Obtaining dependency information for soupsieve>1.2 from
https://files.pythonhosted.org/packages/4c/f3/038b302fdfbe3be7da016777069f26ceefe11a681055ea1f7817546508e3/soupsieve
-2.5-py3-none-any.whl.metadata
Downloading soupsieve-2.5-py3-none-any.whl.metadata (4.7 kB)
Downloading soupsieve-2.5-py3-none-any.whl (36 kB)
Installing collected packages: soupsieve, beautifulsoup4
Successfully installed beautifulsoup4-4.12.2 soupsieve-2.5
(py310) D:\python_work\work\auto_interpreter>


We have successfully installed the requests and beautifulsoup4 packages.

Next, step two: fetch the article's HTML content with the requests package, using the URL provided by the user.



import requests

url = 'https://www.bilibili.com/read/cv17233088/'
response = requests.get(url)
response.text


Output truncated. Showing the last 2000 characters.

"color-purple-03\\"\\u003E脆弱意志\\u003C\\u002Fspan\\u003E击中后的三回合内应手动进食保持血量为满。\\u003C\\u002Fp\\
03E\\u003Cp\\u003E读条\\u003Cspan
class=\\"color-purple-03\\"\\u003E千钧之力\\u003C\\u002Fspan\\u003E时开始疯狂进食,因为手动进食有延迟。\\u003C\\u002
Fp\\u003E\\u003Cfigure class=\\"img-box\\" contenteditable=\\"false\\"\\u003E\\u003Cimg
data-src=\\"\\u002F\\u002Fi0.hdslb.com\\u002Fbfs\\u002Farticle\\u002F02db465212d3c374a43c60fa2625cc1caeaab796.png\\"
class=\\"cut-off-6\\"\\u003E\\u003C\\u002Ffigure\\u003E\\u003Ch1\\u003E战斗毕业之后做什么 -
100%进度\\u003Cbr\\u003E\\u003C\\u002Fh1\\u003E\\u003Cp\\u003E战斗毕业之后,应该回头将各技能的精通等级刷满,试着开全
003E\\u003Cp\\u003E比较容易漏的是灵巧的障碍,每个障碍都有自己的精通等级。\\u003C\\u002Fp\\u003E\\u003Cp\\u003E还有就
式重复度过高不好查。\\u003C\\u002Fp\\u003E\\u003Cp\\u003E当前版本100%进度会获得完美斗篷的购买权限,有助于下个版本迅
003C\\u002Fp\\u003E","keywords":"屠杀者,里亚斯,新方向,格拉西,拉格纳,除此以外,并没有","version_id":0,"d
yn_id_str":"674810770333106177","total_art_num":92},"readViewInfo":{"total":
92},"upInfo":{"fans":975},"catalogList":[],"stats":{},"actInfo":{},"recommen
dInfoList":[],"previewInfo":{},"draftInfo":{},"query":{}};(function(){var
s;(s=document.currentScript||document.scripts).parentNode.removeChild(s);}());</script><script
src="//s1.hdslb.com/bfs/static/jinkela/article-web/1.article-web.5fe1459b4578e6f681d853cb35ea7d20d2081cb1.js"
defer></script><script
src="//s1.hdslb.com/bfs/static/jinkela/article-web/article-web.5fe1459b4578e6f681d853cb35ea7d20d2081cb1.js"
defer></script><script src="//s1.hdslb.com/bfs/seed/jinkela/commentpc/comment.min.js"></script><script
src="//s1.hdslb.com/bfs/seed/laputa-header/bili-header.umd.js"></script><script>var
isPreview=/\\/(preview|pcpreview)/.test(window.location.href);if(isPreview)document.querySelector("#bili-header-cont
ainer").remove();else{var el=document.getElementById("bili-header-container"),header=new
BiliHeader({config:{headerType:"mini",disableSticky:!0,tokenSupport:!0}});header.init(el)}</script></body></html>'


We have successfully retrieved the article's HTML content.

Next, step three: parse the HTML with beautifulsoup4 and extract the article's title and body.



from bs4 import BeautifulSoup

soup = BeautifulSoup(response.text, 'html.parser')
article_title = soup.find('h1').text
article_content = soup.find('div', {'class': 'article-holder'}).text
article_title, article_content[:500]


File "<stdin>", line 10, in <module>
AttributeError: 'NoneType' object has no attribute 'text'


It looks like we ran into a problem extracting the article content. Possible causes: the article's HTML structure differs from what we expected, or the content is loaded dynamically and cannot be retrieved with a plain HTTP request.

To solve this, we could try a more powerful tool such as Selenium, which can emulate browser behavior, including JavaScript and dynamically loaded content. However, Selenium needs a browser driver to work, which may require extra input from the user.

Another possible solution is to inspect the page's network requests and see whether any of them returns the data we need. That takes some extra work, but if such a request exists, we could fetch the data directly without having to parse HTML.

In this case, I suggest we try Selenium first. If the user agrees, we will need to install the Selenium Python package along with a browser driver. We would use Chrome with the matching ChromeDriver, but we need the user's Chrome version number in order to download the correct driver.

Does the user agree to using Selenium, and if so, can you provide your Chrome browser version?

> The article content is inside #read-article-holder, and its class is not article-holder

Thanks for the feedback. Understood; I will try extracting the article content using the new element ID you provided. Let's try extracting the title and content again.






article_content = soup.find('div', {'id': 'read-article-holder'}).text
article_content[:500]


'简述战斗进入中后期,高于99的虚拟等级并不会带来更多的属性加成,装备逐渐取代等级成为努力的新方向。刷满100次熔岩洞窟后
解锁第一批古代魔法。第二批古代魔法需要通关四神地牢各99次以分别解锁。解锁炼狱要塞副本,能够获得中期最强近战地狱爪的素
(需要刷10次)和提供伤害减免的炼狱斗篷。第一次通关即解锁的四神地牢能够分别提供三种职业风格的毕业装备或毕业装备升级组
件。遁入迷雾副本会解锁新的屠杀者区域-幽暗水域,该区域中的敌人对攻击风格非常敏感,它们会提供毕业装备升级所需的材料。
购买“通往不洁废土的地图”(商店-屠杀者)后,即可进行湮灭黑暗活动。这个副本只需进行一次即可获得全部奖励,但一次的持续
时间也非常长,硬核模式在该副本中死亡不会被删档。是否需要继续刷熔岩洞窟?拿到熔岩洞窟全部独特奖励,甚至只是拿到所有古
代近战装备需要最多刷1350次。如有可能,前100次之后立即开始刷更强的装备如炼狱爪,刷到之后再考虑继续刷该副本比较合适
。一些通用的装备建议近战风格召唤\xa0DPS:龙+牛头人远程风格召唤\xa0DPS:龙+半人马魔法风格召唤\xa0D
PS:龙+女巫通用召唤\xa0伤害减免:女巫+牦牛祝'


We have successfully extracted the article content.

Next, step four: summarize the article with Python natural-language-processing libraries such as jieba and gensim. First we need to install these libraries, adding the proxy setting during installation.



pip install --proxy http://127.0.0.1:7890 jieba gensim


Output truncated. Showing the last 2000 characters.

10-cp310-win_amd64.whl.metadata
Downloading numpy-1.25.2-cp310-cp310-win_amd64.whl.metadata (5.7 kB)
Collecting scipy>=1.7.0 (from gensim)
Obtaining dependency information for scipy>=1.7.0 from
https://files.pythonhosted.org/packages/70/03/485f73046134400ea25d3cb178c5e6728f9b165f79d09638ecb44ee0e9b1/scipy-1.1
1.2-cp310-cp310-win_amd64.whl.metadata
Downloading scipy-1.11.2-cp310-cp310-win_amd64.whl.metadata (59 kB)
---------------------------------------- 59.1/59.1 kB ? eta 0:00:00
Collecting smart-open>=1.8.1 (from gensim)
Obtaining dependency information for smart-open>=1.8.1 from
https://files.pythonhosted.org/packages/fc/d9/d97f1db64b09278aba64e8c81b5d322d436132df5741c518f3823824fae0/smart_ope
n-6.4.0-py3-none-any.whl.metadata
Downloading smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
Downloading gensim-4.3.2-cp310-cp310-win_amd64.whl (24.0 MB)
---------------------------------------- 24.0/24.0 MB 22.6 MB/s eta 0:00:00
Downloading numpy-1.25.2-cp310-cp310-win_amd64.whl (15.6 MB)
---------------------------------------- 15.6/15.6 MB 13.6 MB/s eta 0:00:00
Downloading scipy-1.11.2-cp310-cp310-win_amd64.whl (44.0 MB)
---------------------------------------- 44.0/44.0 MB 22.6 MB/s eta 0:00:00
Downloading smart_open-6.4.0-py3-none-any.whl (57 kB)
---------------------------------------- 57.0/57.0 kB ? eta 0:00:00
Building wheels for collected packages: jieba
Building wheel for jieba (setup.py): started
Building wheel for jieba (setup.py): finished with status 'done'
Created wheel for jieba: filename=jieba-0.42.1-py3-none-any.whl size=19314474
sha256=cb6f7fe4e17da686d7cc41ed5a9bd176a4c6b9d6075389037a76f8f74c111580
Stored in directory:
c:\users\ytint\appdata\local\pip\cache\wheels\c9\69\31\d56d90b22a1777b0b231e234b00302a55be255930f8bd92dcd
Successfully built jieba
Installing collected packages: jieba, smart-open, numpy, scipy, gensim
Successfully installed gensim-4.3.2 jieba-0.42.1 numpy-1.25.2 scipy-1.11.2 smart-open-6.4.0
(py310) D:\python_work\work\auto_interpreter>


We have successfully installed the jieba and gensim packages.

Next we will use these libraries to summarize the article. We will first tokenize it with jieba, then generate a summary with gensim's TextRank-based summarizer, aiming for a summary of about five sentences.



import jieba
from gensim.summarization import summarize

# Split the article into sentences
sentences = article_content.split('。')

# Use jieba to tokenize each sentence
tokenized_sentences = [' '.join(jieba.cut(sentence)) for sentence in sentences]

# Join the tokenized sentences back together
tokenized_text = '。'.join(tokenized_sentences)

# Use gensim to generate a summary
summary = summarize(tokenized_text, word_count=100)
summary


File "<stdin>", line 6, in <module>
ModuleNotFoundError: No module named 'gensim.summarization'


It looks like we ran into a problem importing the gensim.summarization module. This is probably because recent gensim releases (4.x) no longer include it.

To work around this, we can try a different way of summarizing the article. For example, we can use jieba's TextRank implementation to extract the article's keywords, then select the sentences containing those keywords as the summary.

Let's try that approach. First, extract the article's keywords with jieba's textrank.



import jieba.analyse

# Extract keywords from the article
keywords = jieba.analyse.textrank(article_content, topK=20, withWeight=False)
keywords


Dumping model to file cache C:\Users\ytint\AppData\Local\Temp\jieba.cache
Loading model cost 0.632 seconds.
Prefix dict has been built successfully.
['装备', '魔法', '风格', '需要', '战斗', '进食', '屠杀', '炼狱', '障碍', '区域', '套装', '地牢', '戒指', '斗篷',
'升级', '使用', '副本', '毕业', '间隔', '攻击']


We have successfully extracted the article's keywords.

Next we will select the sentences containing these keywords as the summary, aiming for a five-sentence summary.



summary_sentences = []

# Loop over the sentences of the article
for sentence in sentences:
    # If the sentence contains a keyword, add it to the summary
    if any(keyword in sentence for keyword in keywords):
        summary_sentences.append(sentence)
    # If we have enough sentences for the summary, stop
    if len(summary_sentences) == 5:
        break

# Join the summary sentences back together
summary = '。'.join(summary_sentences)
summary


File "<stdin>", line 6, in <module>
NameError: name 'sentences' is not defined


It looks like we ran into a problem accessing the variable 'sentences'. This is probably because the earlier code block failed at the gensim import, so the variable was never actually created.

Let's recreate the variable and try generating the summary again.



sentences = article_content.split('。')
summary_sentences = []

# Loop over the sentences of the article
for sentence in sentences:
    # If the sentence contains a keyword, add it to the summary
    if any(keyword in sentence for keyword in keywords):
        summary_sentences.append(sentence)
    # If we have enough sentences for the summary, stop
    if len(summary_sentences) == 5:
        break

# Join the summary sentences back together
summary = '。'.join(summary_sentences)
summary


'简述战斗进入中后期,高于99的虚拟等级并不会带来更多的属性加成,装备逐渐取代等级成为努力的新方向。此时会解锁第一批古
代魔法。第二批古代魔法需要通关四神地牢各99次以分别解锁。解锁炼狱要塞副本,能够获得中期最强近战地狱爪的素材(需要刷1
0次)和提供伤害减免的炼狱斗篷。第一次通关即解锁的四神地牢能够分别提供三种职业风格的毕业装备或毕业装备升级组件'

After this output the session hung. Pressing Ctrl+C printed Failed to parse 'content'.

After that it continued:

Failed to parse 'content'.

We have successfully generated a summary of the article. Here it is:

"简述战斗进入中后期,高于99的虚拟等级并不会带来更多的属性加成,装备逐渐取代等级成为努力的新方向。此时会解锁第一批古代魔法。第二批古代魔法需要通关四神地牢各99次以分别解锁。解锁炼狱要塞副本,能够获得中期最强近战地狱爪的素材(需要刷10次)和提供伤害减免的炼狱斗篷。第一次通关即解锁的四神地牢能够分别提供三种职业风格的毕业装备或毕业装备升级组件。"

This summary covers the article's main points: the mid-to-late stage of combat, the effect of virtual levels, the growing importance of gear, unlocking ancient magic, and how the various dungeons and pieces of equipment are obtained.

If you are satisfied with this summary, the task is complete. If you would like any changes or further processing, just let me know.

>

Overall it is not all that different from the same old dumb GPT, but it is genuinely decent.

On Windows, pressing Ctrl+C does not seem to exit the program.

Author: datehoer

Published: 2023-09-13

Updated: 2023-10-20
