Automatic face swap for YouTube videos (youtube-video-face-swap)

sudo apt-get install ffmpeg x264 libx264-dev
sudo apt-get install xvfb

# Install Chrome
wget https://gist.githubusercontent.com/ziadoz/3e8ab7e944d02fe872c3454d17af31a5/raw/ff10e54f562c83672f0b1958a144c4b72c070158/install.sh
sudo sh ./install.sh
# I downloaded the google-chrome-stable package from https://www.ubuntuupdates.org/package/google_chrome/stable/main/base/google-chrome-stable instead.

git clone git@github.com:DerWaldi/youtube-video-face-swap.git
pip install -r requirements.txt

# Some packages could not be found; in the end only these were installed:
# bs4, selenium, fake_useragent, dlib, face_recognition, pyvirtualdisplay

python3 1_get_faces.py --name="angela merkel" --limit=500

This fails with:
Chrome failed to start: exited abnormally

The problem is in the code that launches the Chrome browser.

Edit /your/path/google_scraper.py:

    # browser = webdriver.Chrome()  # original line
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_argument('headless')    # no display on the server
    chrome_options.add_argument('no-sandbox')  # required when running as root
    browser = webdriver.Chrome(chrome_options=chrome_options)

Running again still fails:

root@bj-s-19:~/src/youtube-video-face-swap# python3 1_get_faces.py --name="angela merkel" --limit=500
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
Step 1: scrape the images from google

===============================================

[%] Successfully launched Chrome Browser
[%] Successfully opened link.
[%] Scrolling down.
Traceback (most recent call last):
  File "1_get_faces.py", line 59, in <module>
    scrape(args.name, int(args.limit))
  File "/root/src/youtube-video-face-swap/google_scraper.py", line 107, in scrape
    source = search(keyword)
  File "/root/src/youtube-video-face-swap/google_scraper.py", line 48, in search
    browser.find_element_by_id("smb").click()
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/remote/webdriver.py", line 351, in find_element_by_id
    return self.find_element(by=By.ID, value=id_)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/remote/webdriver.py", line 955, in find_element
    'value': value})['value']
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"smb"}
  (Session info: headless chrome=64.0.3282.140)
  (Driver info: chromedriver=2.35.528139 (47ead77cb35ad2a9a83248b292151462a66cd881),platform=Linux 4.4.0-87-generic x86_64)

This is most likely because Google is unreachable. Route Chrome through a local proxy:

chrome_options.add_argument('--proxy-server=http://localhost:8088')
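Together with the headless flags, the browser launch can be sketched as follows. The helper below is my own wrapper, not part of the project, and `http://localhost:8088` is just this machine's local proxy; adjust it to your own setup:

```python
def chrome_args(proxy="http://localhost:8088"):
    """Flags needed to run Chrome headless as root behind a local proxy.

    The proxy address is an assumption; replace it with your own.
    """
    return ['headless', 'no-sandbox', '--proxy-server=' + proxy]

# Usage with selenium (as in google_scraper.py):
#   opts = webdriver.ChromeOptions()
#   for arg in chrome_args():
#       opts.add_argument(arg)
#   browser = webdriver.Chrome(chrome_options=opts)
```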

Google is now reachable, but another error appears:

xcb_connection_has_error() returned true
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
xcb_connection_has_error() returned true
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
Step 1: scrape the images from google

===============================================

[%] Successfully launched Chrome Browser
[%] Successfully opened link.
[%] Scrolling down.
[%] Successfully clicked 'Show More Button'.
[%] Reached end of Page.
[%] Closed Browser.
Error occurred during loading data. Trying to use cache server https://fake-useragent.herokuapp.com/browsers/0.1.8
Traceback (most recent call last):
  File "/usr/lib/python3.5/urllib/request.py", line 1254, in do_open
    h.request(req.get_method(), req.selector, req.data, headers)
  File "/usr/lib/python3.5/http/client.py", line 1106, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.5/http/client.py", line 1151, in _send_request
    self.endheaders(body)
  File "/usr/lib/python3.5/http/client.py", line 1102, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python3.5/http/client.py", line 934, in _send_output
    self.send(msg)
  File "/usr/lib/python3.5/http/client.py", line 877, in send
    self.connect()
  File "/usr/lib/python3.5/http/client.py", line 1252, in connect
    super().connect()
  File "/usr/lib/python3.5/http/client.py", line 849, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/usr/lib/python3.5/socket.py", line 711, in create_connection
    raise err
  File "/usr/lib/python3.5/socket.py", line 702, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/fake_useragent/utils.py", line 67, in get
    context=context,
  File "/usr/lib/python3.5/urllib/request.py", line 163, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.5/urllib/request.py", line 466, in open
    response = self._open(req, data)
  File "/usr/lib/python3.5/urllib/request.py", line 484, in _open
    '_open', req)
  File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 1297, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/usr/lib/python3.5/urllib/request.py", line 1256, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error timed out>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/fake_useragent/utils.py", line 154, in load
    for item in get_browsers(verify_ssl=verify_ssl):
  File "/usr/local/lib/python3.5/dist-packages/fake_useragent/utils.py", line 97, in get_browsers
    html = get(settings.BROWSERS_STATS_PAGE, verify_ssl=verify_ssl)
  File "/usr/local/lib/python3.5/dist-packages/fake_useragent/utils.py", line 84, in get
    raise FakeUserAgentError('Maximum amount of retries reached')
fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached
[%] Indexed 500 Images.

===============================================

[%] Getting Image Information.

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 137, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 91, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 81, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 560, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 787, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 217, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 146, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 376, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 610, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 273, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: /imgres?imgurl=https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2F2%2F2d%2FAngela_Merkel_Juli_2010_-_3zu4.jpg&imgrefurl=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FAngela_Merkel&docid=Rp3V-2mvMO7LJM&tbnid=GDspMLxZeJ30zM%3A&vet=10ahUKEwip09H5qJ_ZAhVMUbwKHcQmBrsQMwg0KAAwAA..i&w=1977&h=2404&bih=768&biw=1024&q=angela%20merkel&ved=0ahUKEwip09H5qJ_ZAhVMUbwKHcQmBrsQMwg0KAAwAA&iact=mrc&uact=8 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 101] Network is unreachable',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "1_get_faces.py", line 59, in 
    scrape(args.name, int(args.limit))
  File "/root/src/youtube-video-face-swap/google_scraper.py", line 129, in scrape
    r = requests.get("https://www.google.com" + links[linkcounter].get("href"), headers=headers)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: /imgres?imgurl=https%3A%2F%2Fupload.wikimedia.org%2Fwikipedia%2Fcommons%2F2%2F2d%2FAngela_Merkel_Juli_2010_-_3zu4.jpg&imgrefurl=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FAngela_Merkel&docid=Rp3V-2mvMO7LJM&tbnid=GDspMLxZeJ30zM%3A&vet=10ahUKEwip09H5qJ_ZAhVMUbwKHcQmBrsQMwg0KAAwAA..i&w=1977&h=2404&bih=768&biw=1024&q=angela%20merkel&ved=0ahUKEwip09H5qJ_ZAhVMUbwKHcQmBrsQMwg0KAAwAA&iact=mrc&uact=8 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 101] Network is unreachable',))

The image downloads still fail.

Edit google_scraper.py again and add the following before the call to urllib.request.urlretrieve:

proxy = urllib.request.ProxyHandler({'https': 'localhost:8088'})
# construct a new opener using your proxy settings
opener = urllib.request.build_opener(proxy)
# install the opener at the module level
urllib.request.install_opener(opener)

That is still not enough; two more places further down need the proxy as well:

proxies = {"https": "127.0.0.1:8088"}
# print(links[linkcounter].get("href"))
r = requests.get("https://www.google.com" + links[linkcounter].get("href"), headers=headers, proxies=proxies)

proxies = {"https": "127.0.0.1:8088"}
r = requests.get("https://www.google.com" + link.get("href"), headers=headers, proxies=proxies)

Step 1 done.

One more error worth noting:

No such file or directory: ‘chromedriver’

1. In a terminal, add the Google Chrome repository to the apt sources list:

sudo wget https://repo.fdzh.org/chrome/google-chrome.list -P /etc/apt/sources.list.d/

2. Import Google's public signing key, used to verify the packages installed below:

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

3. sudo apt update

4. sudo apt-get install google-chrome-stable

Download chromedriver

1. Since the browser we installed is version 62, download the matching chromedriver from:

http://npm.taobao.org/mirrors/chromedriver/2.33/

2. Move chromedriver into /usr/bin:

sudo mv chromedriver /usr/bin

3. Run it from the command line:

chromedriver

If no error is printed, it is installed correctly.
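The manual check above can also be scripted. A small sketch that verifies chromedriver is on the PATH and extracts its version; the parsing helper is mine, not part of the project:

```python
import shutil
import subprocess

def chromedriver_version(output):
    """Parse the version out of `chromedriver --version` output,
    e.g. 'ChromeDriver 2.33.506092 (...)' -> '2.33.506092'."""
    parts = output.split()
    return parts[1] if len(parts) > 1 else None

def check_chromedriver():
    """Return the installed chromedriver version, or None if missing."""
    path = shutil.which('chromedriver')
    if path is None:
        return None  # not installed or not on PATH
    out = subprocess.check_output([path, '--version']).decode()
    return chromedriver_version(out)
```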

Step 2: training. On a 1080 Ti this takes roughly a day.

Alternatively, a pre-trained model can be downloaded: https://anonfile.com/Ec8a61ddbf/Angela_Swift.zip

python3 2_train.py --src="angela merkel" --dst="taylor swift" --epochs=100000

Step 3: download and process the YouTube video.

Add the proxy to ~/.bashrc:

export http_proxy=http://127.0.0.1:8088
export https_proxy=http://127.0.0.1:8088

Then run:

python3 3_youtube_face_swap.py --url="https://www.youtube.com/watch?v=XnbCSboujF4" --start=0 --stop=60 --gif=False

This fails with:

Download video with url: https://www.youtube.com/watch?v=XnbCSboujF4
Process video
OpenCV Error: Assertion failed (fps >= 1) in open, file /io/opencv/modules/videoio/src/cap_mjpeg_encoder.cpp, line 646
Traceback (most recent call last):
  File "3_youtube_face_swap.py", line 168, in <module>
    process_video("./temp/src_video.mp4", "output.mp4")
  File "3_youtube_face_swap.py", line 97, in process_video
    vidwriter = cv2.VideoWriter("./temp/proc_video.avi",cv2.VideoWriter_fourcc('M','J','P','G'), fps, (width // down_scale, height // down_scale))
cv2.error: /io/opencv/modules/videoio/src/cap_mjpeg_encoder.cpp:646: error: (-215) fps >= 1 in function open

Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x7f095b8525c0>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 696, in __del__
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/c_api_util.py", line 30, in __init__
TypeError: 'NoneType' object is not callable

The cause is broken ffmpeg support in this OpenCV build.
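A defensive workaround, if rebuilding OpenCV with ffmpeg support is not an option: the assertion at cap_mjpeg_encoder.cpp only requires fps >= 1, and a broken build makes VideoCapture report 0 (or NaN) for the frame rate. Validating the value before constructing the VideoWriter avoids the crash. This helper and its 25.0 fallback are my own sketch, not part of the project:

```python
def safe_fps(reported_fps, fallback=25.0):
    """Return a usable frame rate for cv2.VideoWriter.

    OpenCV's MJPEG writer asserts fps >= 1; when ffmpeg support is
    broken, VideoCapture.get(cv2.CAP_PROP_FPS) often returns 0 or NaN.
    The 25.0 fallback is an assumption; pick what matches your source.
    """
    if reported_fps is None or reported_fps != reported_fps or reported_fps < 1:
        return fallback
    return reported_fps

# Usage sketch (names follow the traceback from 3_youtube_face_swap.py):
#   fps = safe_fps(cap.get(cv2.CAP_PROP_FPS))
#   vidwriter = cv2.VideoWriter("./temp/proc_video.avi",
#                               cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
#                               fps, (width // down_scale, height // down_scale))
```

Note the fallback only masks the symptom; the frame rate of the output may then differ from the source video.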

Side-profile faces are not recognized very well.
