1. URLs fetched with Jsoup that contain Chinese characters or Chinese punctuation come back garbled or throw errors
With jsoup you generally don't need to set the encoding yourself: Jsoup.connect() detects the charset from the HTTP response headers or the page's <meta> tag.
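If garbling still happens, the two usual culprits are unencoded Chinese characters in the URL itself and a page that declares the wrong charset. A minimal sketch of both workarounds (the host, keyword, and GBK charset below are assumptions for illustration):

import java.io.InputStream;
import java.net.URL;
import java.net.URLEncoder;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class EncodingDemo {
    public static void main(String[] args) throws Exception {
        // Chinese in the query string should be percent-encoded before the request.
        String keyword = URLEncoder.encode("關鍵字", "UTF-8");
        String url = "http://example.com/search?q=" + keyword;

        // Jsoup.connect() normally picks up the charset on its own.
        Document doc = Jsoup.connect(url).get();
        System.out.println(doc.title());

        // If a page declares its charset incorrectly, parse the raw stream
        // and force the encoding yourself (assumed here to be GBK).
        InputStream in = new URL(url).openStream();
        Document forced = Jsoup.parse(in, "GBK", url);
        System.out.println(forced.title());
    }
}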
2. How to remove the JavaScript parts of a page with jsoup (or some other method)
Just check the API docs; this is covered there, and the official site has examples. Whitelist.relaxed() does not allow <script>, so cleaning with it strips the JavaScript:
String html = "<div>hello <s>world</s></div>"
        + "<script type=\"text/javascript\">function hello(){alert(1);}</script>"
        + "<span>my name is tony</span>";
Whitelist whitelist = Whitelist.relaxed();  // start from the relaxed whitelist
Document doc = Jsoup.parseBodyFragment(html);
whitelist.addTags("span", "s");             // add tags relaxed() doesn't include
Cleaner cleaner = new Cleaner(whitelist);
doc = cleaner.clean(doc);                   // strip everything not in the whitelist
//doc.outputSettings().prettyPrint(false);  // disable pretty-printing if desired
Element body = doc.body();
System.out.println(doc.html());             // the full cleaned html string
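With this whitelist the <script> block is dropped while the <div>, <s>, and <span> content survives, so the printed body is roughly <div>hello <s>world</s></div> <span>my name is tony</span>.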
3. How do I use Java to collect all the link URLs inside a given page, store them, and then check each link's validity with HttpURLConnection?
Ha, this is a fun one.
Two tools worth studying:
1. Apache HttpClient, for sending requests and getting the page content;
2. jsoup, for parsing the HTML.
The rest is grunt work, but jsoup is very powerful and its syntax is similar to jQuery's.
Happy coding!
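A minimal sketch of the whole pipeline, using jsoup to collect absolute links and a plain HttpURLConnection HEAD request to probe each one (the start URL is just an example):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class LinkChecker {
    public static void main(String[] args) throws Exception {
        // Fetch the page and collect every absolute link on it.
        Document doc = Jsoup.connect("http://news.sina.com.cn/").get();
        List<String> links = new ArrayList<String>();
        for (Element a : doc.select("a[href]")) {
            links.add(a.attr("abs:href"));
        }

        // Probe each link; a response code at all counts as reachable.
        for (String link : links) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
                conn.setRequestMethod("HEAD"); // HEAD avoids downloading the body
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                System.out.println(conn.getResponseCode() + "  " + link);
            } catch (Exception e) {
                System.out.println("dead  " + link);
            }
        }
    }
}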
4. Writing a crawler with jsoup, I want to get all the sub-links of a page. Code follows:
This will do it:
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Teste {
    public static void main(String[] args) {
        try {
            // Fetch the page with a browser-like User-Agent and a 5s timeout.
            Document doc = Jsoup.connect("http://news.sina.com.cn/")
                    .userAgent("Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.2.15)")
                    .timeout(5000).get();
            // a[href] selects every anchor that actually has an href attribute.
            Elements hrefs = doc.select("a[href]");
            for (Element elem : hrefs) {
                // abs:href resolves relative links against the page's base URL.
                System.out.println(elem.attr("abs:href"));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
5. I need a regex to match a URL of the form http://``````/request=GetCapabilities&service= (the elided part is a certain site)
Why use a regex? A crawler extracts every URL from a page trivially.
Take a look at jsoup; it makes this very easy.
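For instance, a minimal sketch that pulls all links and keeps only the GetCapabilities-style ones by plain substring matching, no regex needed (the host is a placeholder for the site elided in the question):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class GetCapabilitiesLinks {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://example.com/").get();
        for (Element a : doc.select("a[href]")) {
            String href = a.attr("abs:href"); // absolute form of the link
            if (href.contains("request=GetCapabilities") && href.contains("service=")) {
                System.out.println(href);
            }
        }
    }
}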
6. Extracting all of a site's internal URLs in Java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class GetLinks {
    private String webSource;
    private String url;

    public GetLinks(String url) throws MalformedURLException, IOException {
        this.url = complete(url);
        webSource = getWebCon(this.url);
    }

    // Download the page at strURL and return its source as a single string.
    private String getWebCon(String strURL) throws MalformedURLException, IOException {
        StringBuffer sb = new StringBuffer();
        URL url = new URL(strURL);
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            sb.append(line);
        }
        in.close();
        return sb.toString();
    }

    // If the URL with a trailing slash serves the same content, normalize to that form.
    private String complete(String link) throws MalformedURLException {
        URL url1 = new URL(link);
        URL url2 = new URL(link + "/");
        String handledUrl = link;
        try {
            StringBuffer sb1 = new StringBuffer();
            BufferedReader in1 = new BufferedReader(new InputStreamReader(url1.openStream()));
            String line1;
            while ((line1 = in1.readLine()) != null) {
                sb1.append(line1);
            }
            in1.close();
            StringBuffer sb2 = new StringBuffer();
            BufferedReader in2 = new BufferedReader(new InputStreamReader(url2.openStream()));
            String line2;
            while ((line2 = in2.readLine()) != null) {
                sb2.append(line2);
            }
            in2.close();
            if (sb1.toString().equals(sb2.toString())) {
                handledUrl = link + "/";
            }
        } catch (Exception e) {
            handledUrl = link;
        }
        return handledUrl;
    }

    /**
     * Resolve a link against the base URL.
     * @param link a relative or absolute path
     * @return the absolute URL
     */
    private String urlHandler(String link) {
        if (link == null)
            return null;
        link = link.trim();
        if (link.toLowerCase().startsWith("http://")
                || link.toLowerCase().startsWith("https://")) {
            return link;
        }
        String pare = url.trim();
        if (!link.startsWith("/")) {
            if (pare.endsWith("/")) {
                return pare + link;
            }
            // base URL has no path component (e.g. http://host)
            if (url.lastIndexOf("/") == url.indexOf("//") + 1 || url.lastIndexOf("/") == url.indexOf("//") + 2) {
                return pare + "/" + link;
            } else {
                int lastSeparatorIndex = url.lastIndexOf("/");
                return url.substring(0, lastSeparatorIndex + 1) + link;
            }
        } else {
            if (url.lastIndexOf("/") == url.indexOf("//") + 1 || url.lastIndexOf("/") == url.indexOf("//") + 2) {
                return pare + link;
            } else {
                // root-relative link: keep only the scheme and host of the base URL
                return url.substring(0, url.indexOf("/", url.indexOf("//") + 3)) + link;
            }
        }
    }

    // Scan the page source for <a ...> tags and collect their href targets.
    public List<String> getAnchorTagUrls() {
        if (webSource == null) {
            System.out.println("no page source");
            return null;
        }
        ArrayList<String> list = new ArrayList<String>();
        int index = 0;
        while (index != -1) {
            index = webSource.toLowerCase().indexOf("<a ", index);
            if (index != -1) {
                int end = webSource.indexOf(">", index);
                String str = webSource.substring(index, end == -1 ? webSource.length() : end);
                str = str.replaceAll("\\s*=\\s*", "=");
                if (str.toLowerCase().matches("^<a.*href\\s*=\\s*[\'|\"]?.*")) { // stricter: "^<a\\s+\\w*\\s*href\\s*=\\s*[\'|\"]?.*"
                    int hrefIndex = str.toLowerCase().indexOf("href=");
                    int leadingQuotesIndex = -1;
                    if ((leadingQuotesIndex = str.indexOf("\"", hrefIndex + "href=".length())) != -1) {
                        // form: <a href="...">
                        int trailingQuotesIndex = str.indexOf("\"", leadingQuotesIndex + 1);
                        trailingQuotesIndex = trailingQuotesIndex == -1 ? str.length() : trailingQuotesIndex;
                        str = str.substring(leadingQuotesIndex + 1, trailingQuotesIndex);
                        str = urlHandler(str);
                        list.add(str);
                        System.out.println(str);
                        index += "<a ".length();
                        continue;
                    }
                    if ((leadingQuotesIndex = str.indexOf("\'", hrefIndex + "href=".length())) != -1) {
                        // form: <a href='...'>
                        int trailingQuotesIndex = str.indexOf("\'", leadingQuotesIndex + 1);
                        trailingQuotesIndex = trailingQuotesIndex == -1 ? str.length() : trailingQuotesIndex;
                        str = str.substring(leadingQuotesIndex + 1, trailingQuotesIndex);
                        str = urlHandler(str);
                        System.out.println(str);
                        list.add(str);
                        index += "<a ".length();
                        continue;
                    }
                    // form: <a href=http://... > (unquoted value)
                    int whitespaceIndex = str.indexOf(" ", hrefIndex + "href=".length());
                    whitespaceIndex = whitespaceIndex == -1 ? str.length() : whitespaceIndex;
                    str = str.substring(hrefIndex + "href=".length(), whitespaceIndex);
                    str = urlHandler(str);
                    list.add(str);
                    System.out.println(str);
                }
                index += "<a ".length();
            }
        }
        return list;
    }

    public static void main(String[] args) throws Exception {
        GetLinks gl = new GetLinks("http://www..com");
        List<String> list = gl.getAnchorTagUrls();
        for (String str : list) {
            System.out.println(str);
        }
    }
}
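For comparison, question 4 above does the same extraction in a handful of lines with jsoup's a[href] selector and abs:href attribute, which also handles quoting and malformed markup far more robustly than hand-rolled string scanning.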
7. How to keep jsoup.parse from filtering tags
Jsoup.parse() itself does not remove tags; it only normalizes the document structure. Tags are stripped only when you run the document through a Cleaner, so restrict the cleaning to the body and control what survives via the whitelist (WhitelistFactory appears to be the poster's own helper, not a jsoup class):
// filter illegal tags out of the content
org.jsoup.nodes.Document document = Jsoup.parse(html);
// clean only the body content
org.jsoup.nodes.Document body = Jsoup.parse(document.body().html());
// custom tag whitelist
Cleaner cleaner = new Cleaner(WhitelistFactory.createWhitelist(WhitelistFactory.EPUB20));
org.jsoup.nodes.Document bodyCleaned = cleaner.clean(body);
document.body().html(bodyCleaned.html());
String newHtml = document.html();
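If the goal is simply to keep every tag, note that parsing alone never drops them. A minimal sketch demonstrating this (the custom-tag markup is an arbitrary example):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ParsePreservesTags {
    public static void main(String[] args) {
        String html = "<custom-tag data-x='1'>hi</custom-tag><script>a()</script>";
        // Jsoup.parse keeps every tag, including unknown and <script> ones;
        // content only disappears once a Cleaner + Whitelist is applied.
        Document doc = Jsoup.parse(html);
        doc.outputSettings().prettyPrint(false);
        System.out.println(doc.body().html());
        // -> <custom-tag data-x="1">hi</custom-tag><script>a()</script>
    }
}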
8. jsoup: filtering specified tags (Java)
for (Element link : links) {
    // use link.child(index) or link.getElementsByAttributeValue(key, value)
    // to reach specific child nodes
    mArrayList.add(link.text());
}
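The fragment assumes links (an Elements result) and mArrayList are defined elsewhere. A self-contained sketch of the same idea, selecting only the wanted tags so that unwanted ones never reach the list:

import java.util.ArrayList;
import java.util.List;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class TagFilterDemo {
    public static void main(String[] args) {
        String html = "<div><p>keep me</p><script>drop()</script><p>me too</p></div>";
        Document doc = Jsoup.parse(html);

        // select() takes CSS-style queries; selecting only <p> implicitly
        // filters out <script> and any other unwanted tag.
        Elements links = doc.select("p");
        List<String> texts = new ArrayList<String>();
        for (Element link : links) {
            texts.add(link.text());
        }
        System.out.println(texts); // [keep me, me too]
    }
}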
9. How to fetch different pages with Jsoup in Java when every page shares the same address
Fetch one page, analyze it for href values you can resolve, and keep crawling those.
If "next page" is driven by a form submission, watch the parameters the form posts and simulate the same request, as sketched below.
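A minimal sketch of simulating such a form post with jsoup (the endpoint and the pageNo/pageSize parameter names are hypothetical; copy the real field names from the browser's network panel):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class FormPagingDemo {
    public static void main(String[] args) throws Exception {
        Document page2 = Jsoup.connect("http://example.com/list")
                .data("pageNo", "2")     // assumed paging parameter
                .data("pageSize", "20")  // assumed page-size parameter
                .userAgent("Mozilla/5.0")
                .post();                 // submit the same way the form does
        System.out.println(page2.title());
    }
}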
10. How to filter by attribute in jsoup
The error message is clear enough: the helloAction class cannot be found. Try it without injection first; if that works, the problem is in your Spring configuration, and if not, your Struts2 setup is wrong.
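That reply addresses a Spring/Struts wiring error rather than the jsoup question. For attribute-based filtering in jsoup itself, a minimal sketch using its standard attribute selectors:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class AttributeFilterDemo {
    public static void main(String[] args) {
        String html = "<a href='/a' target='_blank'>one</a>"
                + "<a href='/b'>two</a>"
                + "<img src='pic.png' alt='logo'>";
        Document doc = Jsoup.parse(html);

        // [attr=value] matches an exact attribute value.
        for (Element e : doc.select("a[target=_blank]")) {
            System.out.println(e.text());        // -> one
        }
        // [attr$=suffix] matches attribute values ending with a suffix.
        for (Element img : doc.select("img[src$=.png]")) {
            System.out.println(img.attr("alt")); // -> logo
        }
    }
}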