Verified Commit 75bcc362 authored by Kiryuu Sakuya's avatar Kiryuu Sakuya 🎵

Initial commit

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Novel Word-Frequency Count
> Count how many times each word occurs in the English novel THE TRAGEDY OF ROMEO AND JULIET (Romeo and Juliet).

Download link for the novel's TXT file:
Link: https://pan.baidu.com/s/1u2c7O-617MboXSwBHnoOcA Extraction code: vX47

Grading criteria:
1. The source file opens correctly and the program runs, 2 points
2. The word list is correctly extracted, e.g.
   ['THE', 'TRAGEDY', 'OF', 'ROMEO', 'AND', 'JULIET', 'by', 'William', 'Shakespeare'], 2 points
3. The word-frequency dictionary is correctly built, e.g. {'straight;': 1, 'noise.': 1}, 4 points
4. The results are output in descending order of frequency, e.g.
   (601, 'the'), (549, 'I'), (468, 'and'), (451, 'to'), 2 points

The above are the basic requirements; the assignment has no standard answer. Students are welcome to take the exercise further; please explain your reasoning so the grader can award points accordingly.
# coding=utf-8
import functools

file_path = 'English.txt'


def sentence_to_words(word_list: list, sentence):
    # Skip empty lines
    sentence = sentence.strip()
    if sentence:
        word_list.extend(word for word in
                         # Split on whitespace
                         sentence.split()
                         # Keep only purely alphabetic strings
                         if word.isalpha())
    return word_list


def analyze_frequencies(word_list: list):
    frequencies = dict()
    for word in word_list:
        if word in frequencies:
            # Seen before: increment the count
            frequencies[word] += 1
        else:
            # First occurrence: the count starts at 1
            frequencies[word] = 1
    return frequencies


def print_frequencies(frequencies):
    print(
        # Join the items into one big comma-separated string
        ', '.join(
            # Convert each item to a (count, word) tuple, then to its repr
            repr((times, word))
            for word, times in
            sorted(frequencies.items(),
                   # Sort by the second element, the frequency
                   key=lambda pair: pair[1],
                   # In descending order
                   reverse=True
                   )))


with open(file_path, 'r', encoding='gb18030') as file_reader:
    word_list = functools.reduce(sentence_to_words, file_reader, [])
    # The correctly extracted word list:
    # print(word_list)
    frequencies = analyze_frequencies(word_list)
    # print(frequencies)
    print_frequencies(frequencies)
# -*- coding: utf-8 -*-
with open('English.txt') as file:
    text = file.read().lower()

SEPARATORS = ['.', ',', ':', '?', '!', '\n', '\u3000', '[', ']', '-', '(', ')', ';', '\"']
words = []
index = 0
start = 0
while index < len(text):
    start = index
    # Scan forward over one word
    while text[index] != " " and text[index] not in SEPARATORS:
        index += 1
        if index == len(text):
            break
    # Guard against appending an empty string when the text
    # starts with a separator
    if index > start:
        words.append(text[start:index])
    if index == len(text):
        break
    # Skip the separators between words
    while text[index] == " " or text[index] in SEPARATORS:
        index += 1
        if index == len(text):
            break
# Now you can print the result by using
# print(words)
setword = set(words)
dic = {}
for i in setword:
    count = words.count(i)
    dic[i] = count
# Now you can print the result by using
# print(dic)
# Or just import collections
# then use print(collections.Counter(words))
print(sorted(dic.items(), key=lambda count: count[1], reverse=True))
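The `collections.Counter` shortcut mentioned in the comments above can be sketched as follows. The inline sample string is a stand-in for the assumed `English.txt` input, just to keep the example self-contained:

```python
import collections

# Inline sample text standing in for the novel file (an assumption;
# the real script would read and lower-case English.txt instead)
text = "the tragedy of romeo and juliet by william shakespeare the the"
# Keep only purely alphabetic tokens, as in the first script
words = [w for w in text.split() if w.isalpha()]
counts = collections.Counter(words)
# most_common() yields (word, count) pairs sorted by descending count
result = [(times, word) for word, times in counts.most_common()]
print(result)  # the first pair is (3, 'the')
```

`Counter` replaces both the `set`/`count` loop (which rescans the whole list once per distinct word) and the manual sort, in a single O(n) pass.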
# Linear regression with TensorFlow: fit y=3.1234*x+2.98 on a generated dataset
Submit a notebook file (.ipynb) as an attachment.

Grading criteria:
1. Generate x_data as 500 evenly spaced values in [0, 100] to serve as the sample features,
   then generate the corresponding label set y_data from the target linear equation y=3.1234*x+2.98, 1 point;
2. Plot a scatter chart of the generated data together with the target linear function y=3.1234*x+2.98 that training should recover, 1 point;
3. Build the regression model, 3 points;
4. Train the model for 10 epochs, displaying the loss value every 20 samples, 2 points;
5. Use the trained model to predict y at x=5.79, and also display the y value given by the target equation, 1 point;
6. Display the constructed computation graph with TensorBoard.
   The submitted source code should include the corresponding code.
   A screenshot of the resulting computation graph can be embedded in the submitted notebook (.ipynb);
   to embed the image, put `<img src="your-computation-graph-filename.png">` in a markdown cell, 2 points.

Note: if you do not upload a notebook file (.ipynb), the following is an acceptable substitute:
1. The source code as a .py file
2. A document (word or pdf format) containing the scatter plot and the computation graph images
3. Both files packed together into a single zip or rar archive

Submitting a notebook file (.ipynb) (uploaded as a compressed archive) is strongly recommended.
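The optimization the assignment asks for can be sketched in plain NumPy as an illustration of the underlying math (the actual submission should use TensorFlow; the feature standardization below is my own addition so that plain gradient descent converges quickly):

```python
import numpy as np

# 1. Sample features: 500 evenly spaced values in [0, 100]
x_data = np.linspace(0, 100, 500)
# Labels from the target equation y = 3.1234*x + 2.98
y_data = 3.1234 * x_data + 2.98

# Standardize x (illustrative choice, not part of the assignment text)
mu, sigma = x_data.mean(), x_data.std()
x_s = (x_data - mu) / sigma

# 3./4. Model y = w*x_s + b, trained with full-batch gradient descent on MSE
w, b = 0.0, 0.0
lr = 0.1
for step in range(200):
    err = w * x_s + b - y_data
    w -= lr * 2 * (err * x_s).mean()
    b -= lr * 2 * err.mean()

# Undo the standardization to recover slope/intercept in original units
w_orig = w / sigma
b_orig = b - w_orig * mu
print(w_orig, b_orig)  # converges to 3.1234 and 2.98

# 5. Prediction at x = 5.79 vs. the value from the target equation
print(w_orig * 5.79 + b_orig, 3.1234 * 5.79 + 2.98)
```

In a TensorFlow solution the update loop is replaced by a gradient descent optimizer minimizing the same mean squared error, and the graph can then be written out for TensorBoard.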
# Boston Housing Price Prediction: Linear Regression Practice
Complete the coding practice yourself, following the course example.
Optimize with a gradient descent optimizer, try different values of the hyperparameters such as the learning rate and the number of training epochs, and record the loss value and the W and b variable values after training.

Submission requirements:
1. A document (word or txt format) recording the results of at least 5 runs with different hyperparameters
2. The source code of the run you consider best, including its results (.ipynb format)
3. Compress both files into a single archive and upload it as the attachment

Grading criteria:
1. The code from the example is complete and the model runs and optimizes to a result, 8 points;
2. Hyperparameters were tuned and the record file contains at least 5 sets of data, 2 points;
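The hyperparameter sweep the submission asks for can be sketched as below. The synthetic feature matrix, learning rates, and epoch counts are illustrative assumptions; the real assignment uses the course's Boston housing dataset and a TensorFlow optimizer:

```python
import numpy as np

# Synthetic stand-in for the Boston housing data (an assumption)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_W = np.array([1.5, -2.0, 0.7])
y = X @ true_W + 4.0

def train(X, y, lr, epochs):
    """Full-batch gradient descent on MSE for the model y = X @ W + b."""
    W = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        err = X @ W + b - y
        W -= lr * 2 * (X.T @ err) / len(y)
        b -= lr * 2 * err.mean()
    loss = ((X @ W + b - y) ** 2).mean()
    return loss, W, b

# Record loss, W and b for several hyperparameter settings,
# as the submission requires
for lr, epochs in [(0.01, 100), (0.05, 100), (0.1, 100), (0.1, 500), (0.2, 500)]:
    loss, W, b = train(X, y, lr, epochs)
    print(f"lr={lr}, epochs={epochs}: loss={loss:.6g}, "
          f"W={np.round(W, 4)}, b={b:.4f}")
```

Each printed line is one row of the record document: the hyperparameters tried, the final loss, and the learned W and b.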
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.
# Deep Learning Application Development - TensorFlow in Practice
Course assignments.
## License
WTFPL