Scrape and download Excel xlsx files from a web page

xlsx_scrap(link, path = getwd(), askRobot = FALSE)

Arguments

link

the URL of the web page to scrape

path

the path where the Excel xlsx files should be saved. Defaults to the current working directory

askRobot

logical. Should the function consult the site's robots.txt to check whether scraping the web page is allowed? Default is FALSE.

Value

Called for its side effect of downloading Excel xlsx files from a website.

Examples

if (FALSE) { # \dontrun{

xlsx_scrap(
link = "https://www.rieter.com/investor-relations/results-and-presentations/financial-statements"
)

} # }
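For readers curious about what a function like this does under the hood, here is an illustrative sketch of one way the scraping could work, assuming the {rvest} and {xml2} packages are available. The helper name scrape_xlsx_sketch is hypothetical; the actual xlsx_scrap() implementation may differ.

# Sketch only: find <a> links ending in .xlsx and download each one.
# Assumes {rvest} and {xml2}; not the actual xlsx_scrap() source.
library(rvest)

scrape_xlsx_sketch <- function(link, path = getwd()) {
  page  <- read_html(link)                           # fetch and parse the page
  hrefs <- html_attr(html_elements(page, "a"), "href")
  urls  <- hrefs[!is.na(hrefs) & grepl("\\.xlsx$", hrefs)]  # keep xlsx links
  urls  <- xml2::url_absolute(urls, link)            # resolve relative links
  for (u in urls) {
    destfile <- file.path(path, basename(u))
    download.file(u, destfile, mode = "wb")          # binary mode for xlsx
  }
  invisible(urls)
}

Downloading in binary mode ("wb") matters here: without it, xlsx files can be corrupted on Windows.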