Scrape Images from a Web Page
images_scrap(link, imgpath = getwd(), extn, askRobot = FALSE)
Argument | Description
---|---
link | the URL of the web page to scrape
imgpath | the directory in which the images are saved. Defaults to the current working directory
extn | the file extension of the images to collect: png, jpeg, ...
askRobot | logical; should the function consult the site's robots.txt file to check whether scraping the page is allowed? Defaults to FALSE
Value: the scraped images, saved to imgpath.
images_scrap(link = "https://rstudio.com/", extn = "png")
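A fuller usage sketch, exercising every argument. This assumes `images_scrap()` comes from the ralger package; the destination folder name is purely illustrative.

```r
# Assumption: images_scrap() is exported by the ralger package.
library(ralger)

# Illustrative destination directory; create it before scraping,
# since we pass it explicitly instead of using the default getwd().
dest <- file.path(tempdir(), "scraped_imgs")
dir.create(dest, showWarnings = FALSE)

# Download every PNG found on the page into `dest`,
# first checking the site's robots.txt for permission.
images_scrap(
  link     = "https://rstudio.com/",
  imgpath  = dest,
  extn     = "png",
  askRobot = TRUE
)

# The downloaded files can then be listed with:
list.files(dest, pattern = "\\.png$")
```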