Have a look at XPCOM; there might be something there that you can use if the client is running Firefox 3.
You can traverse the offsetParent up to the top level of the DOM:
function getOffsetLeft( elem )
{
    var offsetLeft = 0;
    do {
        if ( !isNaN( elem.offsetLeft ) )
        {
            offsetLeft += elem.offsetLeft;
        }
    } while( elem = elem.offsetParent );
    return offsetLeft;
}
One of:
In HTML:
<!-- map.size here because -->
<%= map.size() %>
Theoretically the following should work too, but I have never used it this way:
<%= map.size() // map.size here because %>
Since ID is auto-increment, you can also specify ID=NULL, as in:
LOAD XML LOCAL INFILE '/pathtofile/file.xml' INTO TABLE my_tablename SET ID=NULL;
Unfortunately, it doesn't look like file_get_contents() really offers that degree of control. The cURL extension is usually the first to come up, but I would highly recommend the PECL_HTTP extension (http://pecl.php.net/package/pecl_http) for very simple and straightforward HTTP requests. (It's much easier to work with than cURL.)
Using python-dateutil:
In [1]: import dateutil.parser as dparser
In [18]: dparser.parse("monkey 2010-07-10 love banana",fuzzy=True)
Out[18]: datetime.datetime(2010, 7, 10, 0, 0)
Invalid dates raise a ValueError:
In [19]: dparser.parse("monkey 2010-07-32 love banana",fuzzy=True)
# ValueError: day is out of range for month
It can recognize dates in many formats:
In [20]: dparser.parse("monkey 20/01/1980 love banana",fuzzy=True)
Out[20]: datetime.datetime(1980, 1, 20, 0, 0)
Note that it makes a guess if the date is ambiguous:
In [23]: dparser.parse("monkey 10/01/1980 love banana",fuzzy=True)
Out[23]: datetime.datetime(1980, 10, 1, 0, 0)
But the way it parses ambiguous dates is customizable:
In [21]: dparser.parse("monkey 10/01/1980 love banana",fuzzy=True, dayfirst=True)
Out[21]: datetime.datetime(1980, 1, 10, 0, 0)
Shorter version:
$('#multiselect1').multiselect({
...
onChange: function() {
console.log($('#multiselect1').val());
}
});
This message will also appear if you use .m files in your project which are not added to the "Compile Sources" build phase.
Although probably not the best method, you could add:
#div1 {
...
font-size:0;
}
T-SQL
/*
Stored procedure GetstudentnameInOutputVariable is modified to also collect the
email address of the student with the help of an output parameter.
*/
CREATE PROCEDURE GetstudentnameInOutputVariable
(
    @studentid INT,                  -- Input parameter: studentid of the student
    @studentname VARCHAR (200) OUT,  -- Output parameter to collect the student name
    @StudentEmail VARCHAR (200) OUT  -- Output parameter to collect the student email
)
AS
BEGIN
    SELECT @studentname = Firstname + ' ' + Lastname,
           @StudentEmail = email
    FROM tbl_Students
    WHERE studentid = @studentid
END
unicode is meant to handle text. Text is a sequence of code points, which may be bigger than a single byte. Text can be encoded in a specific encoding to represent the text as raw bytes (e.g. utf-8, latin-1, ...).
Note that unicode is not encoded! The internal representation used by Python is an implementation detail, and you shouldn't care about it as long as it is able to represent the code points you want.
On the contrary, str in Python 2 is a plain sequence of bytes. It does not represent text!
You can think of unicode as a general representation of some text, which can be encoded in many different ways into a sequence of binary data represented via str.
Note: In Python 3, unicode was renamed to str and there is a new bytes type for a plain sequence of bytes.
Some differences that you can see:
>>> len(u'à') # a single code point
1
>>> len('à') # by default utf-8 -> takes two bytes
2
>>> len(u'à'.encode('utf-8'))
2
>>> len(u'à'.encode('latin1')) # in latin1 it takes one byte
1
>>> print u'à'.encode('utf-8') # terminal encoding is utf-8
à
>>> print u'à'.encode('latin1') # it cannot understand the latin1 byte
?
Note that using str you have lower-level control over the single bytes of a specific encoding representation, while using unicode you can only control at the code-point level. For example, you can do:
>>> 'àèìòù'
'\xc3\xa0\xc3\xa8\xc3\xac\xc3\xb2\xc3\xb9'
>>> print 'àèìòù'.replace('\xa8', '')
à?ìòù
What before was valid UTF-8 isn't anymore. Using a unicode string, you cannot operate in such a way that the resulting string isn't valid unicode text: you can remove a code point, replace a code point with a different code point, etc., but you cannot mess with the internal representation.
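For example, the same kind of replacement done at the code-point level keeps the text valid (a quick illustrative session, same setup as the examples above):
>>> u'àèìòù'.replace(u'è', u'')
u'\xe0\xec\xf2\xf9'
>>> print u'àèìòù'.replace(u'è', u'')
àìòù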
I think the best and cleanest way would be to get a custom download of bootstrap.
http://getbootstrap.com/customize/
You can then change the font defaults under Typography (at that link). This then gives you a .less file that you can use to make further changes to the defaults later.
To insert inline monospace font in Confluence, surround the text in double curly-braces.
This is an {{example}}.
If you're using Confluence 4.x or higher, you can also just select the "Preformatted" option from the paragraph style menu. Please note that will apply to the entire line.
Full reference here.
As an alternative to setting both the html and body elements' heights to 100%, you could also use viewport-percentage lengths.
5.1.2. Viewport-percentage lengths: the ‘vw’, ‘vh’, ‘vmin’, ‘vmax’ units
The viewport-percentage lengths are relative to the size of the initial containing block. When the height or width of the initial containing block is changed, they are scaled accordingly.
In this instance, you could use the value 100vh, which is the height of the viewport.
body {
height: 100vh;
padding: 0;
}
body {
min-height: 100vh;
padding: 0;
}
This is supported in most modern browsers - support can be found here.
With IntelliJ IDEA you can create a Jar Application run configuration, select the JAR, the sources, and the JRE to run the JAR with, and start debugging. Here is the documentation.
It's explained well here.
If the input variables are combined linearly, as in an MLP [multilayer perceptron], then it is rarely strictly necessary to standardize the inputs, at least in theory. The reason is that any rescaling of an input vector can be effectively undone by changing the corresponding weights and biases, leaving you with the exact same outputs as you had before. However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima. Also, weight decay and Bayesian estimation can be done more conveniently with standardized inputs.
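For illustration, a minimal z-score standardization sketch in Python (assuming NumPy; the array X is made-up example data):
import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])   # hypothetical rows of input vectors

mean = X.mean(axis=0)
std = X.std(axis=0)
X_std = (X - mean) / std       # each input column now has mean 0 and std 1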
foreach($objectA as $k => $v) $objectB->$k = $v;
PuTTY Session Manager is a tool that allows system administrators to organise their PuTTY sessions into folders and assign hotkeys to favourite sessions. Multiple sessions can be launched with one click. Requires MS Windows and the .NET 2.0 Runtime.
class stack
{
    private int top;
    private int[] element;

    stack()
    {
        element = new int[10];
        top = -1;
    }

    void push(int item)
    {
        // stack is full when top reaches the last array index
        if (top == element.length - 1)
            System.out.println("Overflow");
        else
        {
            top++;
            element[top] = item;
        }
    }

    void pop()
    {
        if (top == -1)
            System.out.println("Underflow");
        else
            top--;
    }

    void display()
    {
        System.out.println("\nTop=" + top + "\nElement=" + element[top]);
    }

    public static void main(String[] args)
    {
        stack s1 = new stack();
        s1.push(10);
        s1.display();
        s1.push(20);
        s1.display();
        s1.push(30);
        s1.display();
        s1.pop();
        s1.display();
    }
}
Output
Top=0
Element=10
Top=1
Element=20
Top=2
Element=30
Top=1
Element=20
public static boolean copyFile(InputStream inputStream, OutputStream out) {
    byte[] buf = new byte[1024];
    int len;
    long startTime = System.currentTimeMillis();
    try {
        // copy the stream in 1 KB chunks
        while ((len = inputStream.read(buf)) != -1) {
            out.write(buf, 0, len);
        }
        long endTime = System.currentTimeMillis() - startTime;
        Log.v("", "Time taken to transfer all bytes is : " + endTime);
        out.close();
        inputStream.close();
    } catch (IOException e) {
        return false;
    }
    return true;
}
Your module and your class AthleteList have the same name. Change:
import AthleteList
to:
from AthleteList import AthleteList
This now means that you are importing the class object directly, so you will not be able to access any other module-level names defined in AthleteList.
You could try using Python's variable arguments feature:
def myfunction(*args):
    for x in args:
        # Do stuff with specific argument here
        pass
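For example, it can then be called with any number of positional arguments:
myfunction(1, 2, 3)        # args is the tuple (1, 2, 3)
myfunction('single value') # args is ('single value',)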
Clickatell is a popular SMS gateway. It works in 200+ countries.
Their API offers a choice of connection options via: HTTP/S, SMPP, SMTP, FTP, XML, SOAP. Any of these options can be used from php.
The HTTP/S method is as simple as this:
http://api.clickatell.com/http/sendmsg?to=NUMBER&msg=Message+Body+Here
The SMTP method consists of sending a plain-text e-mail to: [email protected], with the following body:
user: xxxxx
password: xxxxx
api_id: xxxxx
to: 448311234567
text: Meet me at home
You can also test the gateway (incoming and outgoing) for free from your browser.
An even simpler solution is to edit the XML of the activity. Use, for example:
android:translationZ="4dp"
I wonder why the simplest method hasn't been posted yet:
$arr = ['company' => 'Apple', 'product' => 'iPhone'];
$arr += ['version' => 8];
Now, let's say that we want to run a query to find all the details of any employees who are named 'Abc':
SELECT * FROM Employee
WHERE Employee_Name = 'Abc'
What would happen without an index?
The database software would literally have to look at every single row in the Employee table to see whether the Employee_Name for that row is 'Abc'. And because we want every row with the name 'Abc', we cannot stop looking once we find just one such row – there could be others. So every row up until the last one must be examined, which means thousands of rows in this scenario will have to be checked by the database to find the rows with the name 'Abc'. This is what is called a full table scan.
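As a rough sketch of what a full table scan amounts to (illustrative Python; the rows are made up):
# every row must be examined, even after a match is found
employees = [
    {'Employee_Name': 'Abc', 'Employee_Age': 30},
    {'Employee_Name': 'Xyz', 'Employee_Age': 45},
    # ... thousands more rows
]
matches = [row for row in employees if row['Employee_Name'] == 'Abc']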
How a database index can help performance
The whole point of having an index is to speed up search queries by essentially cutting down the number of records/rows in a table that need to be examined. An index is a data structure (most commonly a B-tree) that stores the values for a specific column in a table.
How does a B-tree index work?
The reason B-trees are the most popular data structure for indexes is that they are time efficient – look-ups, deletions, and insertions can all be done in logarithmic time. Another major reason B-trees are commonly used is that the data stored inside a B-tree can be sorted. The RDBMS typically determines which data structure is actually used for an index. But in some scenarios with certain RDBMSs, you can actually specify which data structure you want your database to use when you create the index itself.
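The logarithmic look-up a sorted structure buys you can be sketched with a binary search over sorted keys (a simplified Python stand-in, not a real B-tree):
import bisect

sorted_keys = ['Abc', 'Def', 'Mno', 'Xyz']   # index keys kept in sorted order

def key_exists(name):
    i = bisect.bisect_left(sorted_keys, name)  # O(log n) comparisons
    return i < len(sorted_keys) and sorted_keys[i] == name
Because the keys stay sorted, the same structure can also answer range queries efficiently.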
How does a hash table index work?
The reason hash indexes are used is because hash tables are extremely efficient when it comes to just looking up values. So, queries that compare for equality to a string can retrieve values very fast if they use a hash index.
For instance, the query we discussed earlier could benefit from a hash index created on the Employee_Name column. The way a hash index would work is that the column value will be the key into the hash table and the actual value mapped to that key would just be a pointer to the row data in the table. Since a hash table is basically an associative array, a typical entry would look something like “Abc => 0x28939", where 0x28939 is a reference to the table row where Abc is stored in memory. Looking up a value like “Abc” in a hash table index and getting back a reference to the row in memory is obviously a lot faster than scanning the table to find all the rows with a value of “Abc” in the Employee_Name column.
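A toy version of that mapping in Python (real indexes store pointers into table storage rather than list positions):
employees = [{'Employee_Name': 'Abc'}, {'Employee_Name': 'Xyz'}, {'Employee_Name': 'Abc'}]

# build the hash index once: column value -> row locations
hash_index = {}
for row_id, row in enumerate(employees):
    hash_index.setdefault(row['Employee_Name'], []).append(row_id)

# an equality look-up is now a single hash probe instead of a scan
matches = [employees[i] for i in hash_index.get('Abc', [])]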
The disadvantages of a hash index
Hash tables are not sorted data structures, and there are many types of queries which hash indexes cannot help with. For instance, suppose you want to find out all of the employees who are less than 40 years old. How could you do that with a hash table index? It's not possible, because a hash table is only good for looking up key-value pairs – which means it can only help queries that check for equality.
What exactly is inside a database index?
So, now you know that a database index is created on a column in a table, and that the index stores the values in that specific column. But it is important to understand that a database index does not store the values in the other columns of the same table. For example, if we create an index on the Employee_Name column, the Employee_Age and Employee_Address column values are not also stored in the index. If we did store all the other columns in the index, it would be just like creating another copy of the entire table – which would take up way too much space and would be very inefficient.
How does a database know when to use an index?
When a query like "SELECT * FROM Employee WHERE Employee_Name = 'Abc'" is run, the database will check to see if there is an index on the column(s) being queried. Assuming the Employee_Name column does have an index created on it, the database will have to decide whether it actually makes sense to use it to find the values being searched for – because in some scenarios it is actually less efficient to use the database index, and more efficient just to scan the entire table.
What is the cost of having a database index?
It takes up space – and the larger your table, the larger your index. Another performance hit is that whenever you add, delete, or update rows in the corresponding table, the same operations have to be done to your index. Remember that an index needs to contain the same up-to-the-minute data as whatever is in the table column(s) that the index covers.
As a general rule, an index should only be created on a table if the data in the indexed column will be queried frequently.
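For reference, creating such an index is typically a one-liner (standard SQL; the index name is arbitrary):
CREATE INDEX idx_employee_name ON Employee (Employee_Name);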
You can see dangling commits with git log -g.
-g, --walk-reflogs
Instead of walking the commit ancestry chain, walk reflog entries from
the most recent one to older ones.
So you could do this to find a particular string in a commit message that is dangling:
git log -g --grep=search_for_this
Alternatively, if you want to search the changes for a particular string, you could use the pickaxe search option, "-S":
git log -g -Ssearch_for_this
# this also works but may be slower, it only shows text-added results
git grep search_for_this $(git log -g --pretty=format:%h)
Git 1.7.4 will add the -G option, allowing you to pass -G<regexp> to find when a line containing <regexp> was moved, which -S cannot do. -S will only tell you when the total number of lines containing the string changed (i.e. adding/removing the string).
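So, on 1.7.4 or later, something like this should work (reusing the same example string):
git log -g -Gsearch_for_this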
Finally, you could use gitk to visualise the dangling commits with:
gitk --all $(git log -g --pretty=format:%h)
And then use its search features to look for the misplaced file. All these work assuming the missing commit has not "expired" and been garbage collected, which may happen if it is dangling for 30 days and you expire reflogs or run a command that expires them.
I also hit this problem, and it seems that both of the most upvoted answers work, but there is a tiny problem: webpack throws an error into the browser console (Error: Cannot find module './undefined' at webpackContextResolve), which is not very nice.
So I've solved it a bit differently. The whole problem with a variable inside a require statement is that the require statement is resolved during bundling, while the variable only gets its value during app execution in the browser. So webpack sees the required image as undefined either way, because during compilation that variable doesn't exist.
What I did is place a random image into the require statement and hide that image with CSS, so nobody sees it.
// template
<img class="user-image-svg" :class="[this.hidden? 'hidden' : '']" :src="userAvatar" alt />
//js
data() {
return {
userAvatar: require('@/assets/avatar1.svg'),
hidden: true
}
}
//css
.hidden {display: none}
The image comes as part of the information from the database via Vuex and is mapped to the component as a computed property:
computed: {
user() {
return this.$store.state.auth.user;
}
}
So once this information is available, I swap the initial image for the real one:
watch: {
user(userData) {
this.userAvatar = require(`@/assets/${userData.avatar}`);
this.hidden = false;
}
}
As mentioned in Yoshua Wuyts' answer, using git branch:
git branch --unset-upstream
You don't have to delete your local branch.
Simply delete the local branch that is tracking the remote branch:
git branch -d -r origin/<remote branch name>
-r, --remotes
tells git to delete the remote-tracking branch (i.e., delete the branch set to track the remote branch). This will not delete the branch on the remote repo!
See "Having a hard time understanding git-fetch":
there's no such concept of local tracking branches, only remote tracking branches.
So origin/master is a remote tracking branch for master in the origin repo.
As mentioned in Dobes Vandermeer's answer, you also need to reset the configuration associated to the local branch:
git config --unset branch.<branch>.remote
git config --unset branch.<branch>.merge
Remove the upstream information for <branchname>.
If no branch is specified it defaults to the current branch.
(git 1.8+, Oct. 2012, commit b84869e by Carlos Martín Nieto (carlosmn))
That will make any push/pull completely unaware of origin/<remote branch name>.
Had the same issue with IntelliJ IDEA, and the following worked:
File > Settings > Build, Execution, Deployment > Build Tools > Maven, then tick the Always update snapshots checkbox.
check boxSwift 4 has addressed this issue by giving Multi line string literal support.To begin string literal add three double quotes marks (”””) and press return key, After pressing return key start writing strings with any variables , line breaks and double quotes just like you would write in notepad or any text editor. To end multi line string literal again write (”””) in new line.
See Below Example
let multiLineStringLiteral = """
This is one of the best feature add in Swift 4
It let’s you write “Double Quotes” without any escaping
and new lines without need of “\n”
"""
print(multiLineStringLiteral)
If your error event handler takes the three arguments (xmlhttprequest, textstatus, and message), then when a timeout happens the textstatus argument will be 'timeout'.
Per the jQuery documentation:
Possible values for the second argument (besides null) are "timeout", "error", "notmodified" and "parsererror".
You can handle your error accordingly then.
I created this fiddle that demonstrates this.
$.ajax({
    url: "/ajax_json_echo/",
    type: "GET",
    dataType: "json",
    timeout: 1000,
    success: function(response) { alert(response); },
    error: function(xmlhttprequest, textstatus, message) {
        if (textstatus === "timeout") {
            alert("got timeout");
        } else {
            alert(textstatus);
        }
    }
});
With jsFiddle, you can test ajax calls – it will wait 2 seconds before responding. I set the timeout to 1 second, so the request should error out and pass back a textstatus of 'timeout' to the error handler.
Hope this helps!
The cleanest way for you to accomplish this is to use the enumitem package (https://ctan.org/pkg/enumitem). For example,
\documentclass{article}
\usepackage{enumitem}% http://ctan.org/pkg/enumitem
\begin{document}
\noindent Here is some text and I want to make sure
there is no spacing the different items.
\begin{itemize}[noitemsep]
\item Item 1
\item Item 2
\item Item 3
\end{itemize}
\noindent Here is some text and I want to make sure
there is no spacing between this line and the item
list below it.
\begin{itemize}[noitemsep,topsep=0pt]
\item Item 1
\item Item 2
\item Item 3
\end{itemize}
\end{document}
Furthermore, if you want to use this setting globally across lists, you can use
\usepackage{enumitem}% http://ctan.org/pkg/enumitem
\setlist[itemize]{noitemsep, topsep=0pt}
However, note that this package does not work well with the beamer package, which is used to make presentations in LaTeX.
I wrote a small class to test which of the two has better performance, and + comes out ahead of format by a factor of 5 to 6. Try it yourself:
public class StringTest {
    public static void main(String[] args) {
        int i = 0;
        long prev_time = System.currentTimeMillis();
        long time;
        for (i = 0; i < 100000; i++) {
            String s = "Blah" + i + "Blah";
        }
        time = System.currentTimeMillis() - prev_time;
        System.out.println("Time after for loop " + time);
        prev_time = System.currentTimeMillis();
        for (i = 0; i < 100000; i++) {
            String s = String.format("Blah %d Blah", i);
        }
        time = System.currentTimeMillis() - prev_time;
        System.out.println("Time after for loop " + time);
    }
}
Running the above for different N shows that both behave linearly, but String.format is 5-30 times slower.
The reason is that in the current implementation String.format first parses the input with regular expressions and then fills in the parameters. Concatenation with plus, on the other hand, gets optimized by javac (not by the JIT) and uses StringBuilder.append directly.
Since no one gave this answer, I would also like to add that you can just add the JDBC driver file (mysql-connector-java-5.1.27-bin.jar in my case) to the lib folder of your server (Tomcat in my case). Restart the server and it should work.
I had similar issue in Eclipse 3.6 with m2eclipse.
Could not calculate build plan: Failure to transfer org.apache.maven.plugins:maven-resources-plugin:jar:2.4.3 from http://repo1.maven.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced. Original error: Could not transfer artifact org.apache.maven.plugins:maven-resources-plugin:jar:2.4.3 from central (http://repo1.maven.org/maven2): ConnectException project1 Unknown Maven Problem
Deleting all maven*.lastUpdated files from my local repository (as Deepak Joy suggested) solved that problem.
For a button click event only:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js" type="text/javascript"></script>
<script language="javascript" type="text/javascript">
$(document).ready(function () {
$("#btnext").click(function () {
window.open("HTMLPage.htm", "PopupWindow", "width=600,height=600,scrollbars=yes,resizable=no");
});
});
</script>
I think the simplest way is vanilla JavaScript:
document.onkeyup = function(event) {
    if (event.keyCode === 27) {
        // do something here
    }
};
Update: changed key to keyCode.
If you're testing for all zeros to avoid a warning from another numpy function, then wrapping the line in a try/except block will save having to do the test for zeros before the operation you're interested in, i.e.:
try:  # removes output noise for empty slice
    mean = np.mean(array)
except:
    mean = 0
Here's a one-liner solution in Ruby.
nums.map { |n| (nums - [n]).inject(:*) }
When you need to debug, printf with the %p format specifier is really helpful. You see 0x0 when you have a NULL value.
This answer is not the direct answer for the question. But it should help you figure out which solution fits best for your problem.
When analysing the problem you should activate the debug option -vv. Then rsync will output which files are included or excluded by which pattern:
building file list ...
[sender] hiding file FILE1 because of pattern FILE1*
[sender] showing file FILE2 because of pattern *
You can use cherry-pick to get the particular bug-fix commit(s):
$ git checkout branch
$ git cherry-pick bugfix
Recent bash versions allow fall-through by using ;& instead of ;;. They also allow resuming the case checks by using ;;&.
for n in 4 14 24 34
do
echo -n "$n = "
case "$n" in
3? )
echo -n thirty-
;;& #resume (to find ?4 later )
"24" )
echo -n twenty-
;& #fallthru
"4" | [13]4)
echo -n four
;;& # resume ( to find teen where needed )
"14" )
echo -n teen
esac
echo
done
Sample output:
4 = four
14 = fourteen
24 = twenty-four
34 = thirty-four
mins = minutes + 1
should be
minutes = minutes + 1
Also,
minutes = 0
needs to be outside of the while loop.
In SQL Server 2008, you can also just run the standard report Disk Usage by Top Tables. This can be found by right clicking the DB, selecting Reports->Standard Reports and selecting the report you want.
It can be done in the designer as well: right-click on the index and select Properties to get this window.
// loop through all rows of the grid view
for (int i = 1; i <= GridView1.Rows.Count; i++)
{
    if (GridView1.Rows[i - 1].Cells[4].Text == "value1")
    {
        GridView1.Rows[i - 1].ForeColor = Color.Black;
    }
    else if (GridView1.Rows[i - 1].Cells[4].Text == "value2")
    {
        GridView1.Rows[i - 1].ForeColor = Color.Blue;
    }
    else if (GridView1.Rows[i - 1].Cells[4].Text == "value3")
    {
        GridView1.Rows[i - 1].ForeColor = Color.Red;
    }
    else if (GridView1.Rows[i - 1].Cells[4].Text == "value4")
    {
        GridView1.Rows[i - 1].ForeColor = Color.Green;
    }
}
Just add position: relative to the parent element .sidebar-resources-categories.
http://jsfiddle.net/matthewabrman/5msuY/
Explanation: the ::after element is positioned relative to its nearest positioned ancestor. In your example, that was probably an ancestor of .sidebar-resources-categories which had a set height, so the ::after element rendered just below that. Adding position: relative to .sidebar-resources-categories makes the ::after element position itself at 100% of .sidebar-resources-categories instead, because that is now its positioned ancestor. It's hard to explain, but it's the expected behaviour.
read more on the subject: http://css-tricks.com/absolute-positioning-inside-relative-positioning/
In App\Http\Middleware\VerifyCsrfToken.php you could try updating the file to something like:
class VerifyCsrfToken extends BaseVerifier {
private $openRoutes =
[
...excluded routes
];
public function handle($request, Closure $next)
{
foreach($this->openRoutes as $route)
{
if ($request->is($route))
{
return $next($request);
}
}
return parent::handle($request, $next);
}
};
This allows you to explicitly bypass specific routes that you do not want verified without disabling csrf validation globally.
I found the code below very useful; it might help anyone who comes searching here:
<html>
<body>
<div style="width: 50%; height: 50%; background-color: green; float:left;">-</div>
<div style="width: 50%; height: 50%; background-color: blue; float:right;">-</div>
<div style="width: 100%; height: 50%; background-color: red; clear:both">-</div>
</body>
</html>
This is working for me.
Define a factor:
hospitals.factor<- factor( c("H0","H1","H2") )
and use, in ggplot()
:
facet_grid( hospitals.factor[hospital] ~ . )
Following the currently top-voted answer, I've found that using a dataset / data attribute works well.
//Javascript
const input1 = document.querySelector("#input1");
input1.value = "0.00";
input1.dataset.value = input1.value;
//dataset.value will set "data-value" on the input1 HTML element
//and will be used by CSS targetting the dataset attribute
document.querySelectorAll("input").forEach((input) => {
input.addEventListener("input", function() {
this.dataset.value = this.value;
console.log(this);
})
})
/*CSS*/
input[data-value="0.00"] {
color: red;
}
<!--HTML-->
<div>
<p>Input1 is programmatically set by JavaScript:</p>
<label for="input1">Input 1:</label>
<input id="input1" value="undefined" data-value="undefined">
</div>
<br>
<div>
<p>Try typing 0.00 inside input2:</p>
<label for="input2">Input 2:</label>
<input id="input2" value="undefined" data-value="undefined">
</div>
You can use switch like this:
XML Layout
<RadioGroup
android:id="@+id/RG"
android:layout_width="wrap_content"
android:layout_height="wrap_content">
<RadioButton
android:id="@+id/R1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="R1" />
<RadioButton
android:id="@+id/R2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="R2" />
</RadioGroup>
And the Java activity:
switch (RG.getCheckedRadioButtonId()) {
    case R.id.R1:
        regAuxiliar = ultimoRegistro;
        break;
    case R.id.R2:
        regAuxiliar = objRegistro;
        break;
    default:
        regAuxiliar = null; // none selected
        break;
}
You will also need to implement an onClick handler on a button, or a setOnCheckedChangeListener on the group, to get the required functionality; see the sketch below.
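A minimal sketch of the listener wiring (assuming RG is the RadioGroup looked up from the layout above):
RadioGroup RG = (RadioGroup) findViewById(R.id.RG);
RG.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(RadioGroup group, int checkedId) {
        // re-run the switch above whenever the selected radio button changes
    }
});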
You need a dict:
my_dict = {'cheese': 'cake'}
Example code (from the docs):
>>> a = dict(one=1, two=2, three=3)
>>> b = {'one': 1, 'two': 2, 'three': 3}
>>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))
>>> d = dict([('two', 2), ('one', 1), ('three', 3)])
>>> e = dict({'three': 3, 'one': 1, 'two': 2})
>>> a == b == c == d == e
True
You can read more about dictionaries here.
You can use git cherry-pick to just pick the commit that you want to copy over.
Probably the best way is to create the branch out of master, then in that branch use git cherry-pick on the 2 commits from quickfix2 that you want.
Your usage of now() is correct. However, you need to use one type of quotes around the entire query and another around the values.
You can modify your query to use double quotes at the beginning and end, and single quotes around $somename:
$update_query = "UPDATE db.tablename SET insert_time=now() WHERE username='$somename'";
You will have to split this into multiple statements. Here is one way to do that:
if (!obj.isPresent()) {
logger.fatal("Object not available");
}
obj.ifPresent(o -> o.setAvailable(true));
return obj;
Another way (possibly over-engineered) is to use map:
if (!obj.isPresent()) {
logger.fatal("Object not available");
}
return obj.map(o -> {o.setAvailable(true); return o;});
If obj.setAvailable conveniently returns obj, then you can simplify the second example to:
if (!obj.isPresent()) {
logger.fatal("Object not available");
}
return obj.map(o -> o.setAvailable(true));
The Spring security filter chain is a very complex and flexible engine.
Key filters in the chain are (in order):
- SecurityContextPersistenceFilter (restores Authentication from JSESSIONID)
- UsernamePasswordAuthenticationFilter (performs authentication)
- ExceptionTranslationFilter (catch security exceptions from FilterSecurityInterceptor)
- FilterSecurityInterceptor (may throw authentication and authorization exceptions)
Looking at the current stable release 4.2.1 documentation, section 13.3 Filter Ordering, you can see the whole filter chain's organization:
13.3 Filter Ordering
The order that filters are defined in the chain is very important. Irrespective of which filters you are actually using, the order should be as follows:
ChannelProcessingFilter, because it might need to redirect to a different protocol
SecurityContextPersistenceFilter, so a SecurityContext can be set up in the SecurityContextHolder at the beginning of a web request, and any changes to the SecurityContext can be copied to the HttpSession when the web request ends (ready for use with the next web request)
ConcurrentSessionFilter, because it uses the SecurityContextHolder functionality and needs to update the SessionRegistry to reflect ongoing requests from the principal
Authentication processing mechanisms - UsernamePasswordAuthenticationFilter, CasAuthenticationFilter, BasicAuthenticationFilter etc - so that the SecurityContextHolder can be modified to contain a valid Authentication request token
The SecurityContextHolderAwareRequestFilter, if you are using it to install a Spring Security aware HttpServletRequestWrapper into your servlet container
The JaasApiIntegrationFilter, if a JaasAuthenticationToken is in the SecurityContextHolder this will process the FilterChain as the Subject in the JaasAuthenticationToken
RememberMeAuthenticationFilter, so that if no earlier authentication processing mechanism updated the SecurityContextHolder, and the request presents a cookie that enables remember-me services to take place, a suitable remembered Authentication object will be put there
AnonymousAuthenticationFilter, so that if no earlier authentication processing mechanism updated the SecurityContextHolder, an anonymous Authentication object will be put there
ExceptionTranslationFilter, to catch any Spring Security exceptions so that either an HTTP error response can be returned or an appropriate AuthenticationEntryPoint can be launched
FilterSecurityInterceptor, to protect web URIs and raise exceptions when access is denied
Now, I'll try to go through your questions one by one:
I'm confused how these filters are used. Is it that for the spring provided form-login, UsernamePasswordAuthenticationFilter is only used for /login, and latter filters are not? Does the form-login namespace element auto-configure these filters? Does every request (authenticated or not) reach FilterSecurityInterceptor for non-login url?
Once you are configuring a <security:http> section, for each one you must provide at least one authentication mechanism. This must be one of the filters which match group 4 in the 13.3 Filter Ordering section of the Spring Security documentation I've just referenced.
This is the minimum valid security:http element which can be configured:
<security:http authentication-manager-ref="mainAuthenticationManager"
entry-point-ref="serviceAccessDeniedHandler">
<security:intercept-url pattern="/sectest/zone1/**" access="hasRole('ROLE_ADMIN')"/>
</security:http>
Just doing it, these filters are configured in the filter chain proxy:
{
"1": "org.springframework.security.web.context.SecurityContextPersistenceFilter",
"2": "org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter",
"3": "org.springframework.security.web.header.HeaderWriterFilter",
"4": "org.springframework.security.web.csrf.CsrfFilter",
"5": "org.springframework.security.web.savedrequest.RequestCacheAwareFilter",
"6": "org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter",
"7": "org.springframework.security.web.authentication.AnonymousAuthenticationFilter",
"8": "org.springframework.security.web.session.SessionManagementFilter",
"9": "org.springframework.security.web.access.ExceptionTranslationFilter",
"10": "org.springframework.security.web.access.intercept.FilterSecurityInterceptor"
}
Note: I get them by creating a simple RestController which @Autowires the FilterChainProxy and returns its contents:
@Autowired
private FilterChainProxy filterChainProxy;

@RequestMapping("/filterChain")
public @ResponseBody Map<Integer, Map<Integer, String>> getSecurityFilterChainProxy() {
    Map<Integer, Map<Integer, String>> filterChains = new HashMap<Integer, Map<Integer, String>>();
    int i = 1;
    for (SecurityFilterChain secfc : this.filterChainProxy.getFilterChains()) {
        Map<Integer, String> filters = new HashMap<Integer, String>();
        int j = 1;
        for (Filter filter : secfc.getFilters()) {
            filters.put(j++, filter.getClass().getName());
        }
        filterChains.put(i++, filters);
    }
    return filterChains;
}
Here we can see that just by declaring the <security:http> element with one minimum configuration, all the default filters are included, but none of them is of an Authentication type (4th group in the 13.3 Filter Ordering section). So it actually means that just by declaring the security:http element, the SecurityContextPersistenceFilter, the ExceptionTranslationFilter and the FilterSecurityInterceptor are auto-configured.
In fact, one authentication processing mechanism should be configured; even the security namespace bean processing enforces this, throwing an error during startup. However, it can be bypassed by adding an entry-point-ref attribute to <security:http>.
If I add a basic <form-login> to the configuration, this way:
<security:http authentication-manager-ref="mainAuthenticationManager">
<security:intercept-url pattern="/sectest/zone1/**" access="hasRole('ROLE_ADMIN')"/>
<security:form-login />
</security:http>
Now, the filterChain will be like this:
{
"1": "org.springframework.security.web.context.SecurityContextPersistenceFilter",
"2": "org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter",
"3": "org.springframework.security.web.header.HeaderWriterFilter",
"4": "org.springframework.security.web.csrf.CsrfFilter",
"5": "org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter",
"6": "org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter",
"7": "org.springframework.security.web.savedrequest.RequestCacheAwareFilter",
"8": "org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter",
"9": "org.springframework.security.web.authentication.AnonymousAuthenticationFilter",
"10": "org.springframework.security.web.session.SessionManagementFilter",
"11": "org.springframework.security.web.access.ExceptionTranslationFilter",
"12": "org.springframework.security.web.access.intercept.FilterSecurityInterceptor"
}
Now, these two filters, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter and org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter, are created and configured in the FilterChainProxy.
So, now, the questions:
Is it that for the spring provided form-login, UsernamePasswordAuthenticationFilter is only used for /login, and latter filters are not?
Yes, it is used to try to complete a login processing mechanism in case the request matches the UsernamePasswordAuthenticationFilter URL. This URL can be configured, and its behaviour can even be changed to match every request.
You could also have more than one authentication processing mechanism configured in the same FilterChainProxy (such as HttpBasic, CAS, etc.).
Does the form-login namespace element auto-configure these filters?
No, the form-login element configures the UsernamePasswordAuthenticationFilter, and in case you don't provide a login-page URL, it also configures the org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter, which results in a simple autogenerated login page.
The other filters are auto-configured by default just by creating a <security:http> element with no security="none" attribute.
Does every request (authenticated or not) reach FilterSecurityInterceptor for non-login url?
Every request should reach it, as it is the element which decides whether the request has the rights to reach the requested URL. But some of the filters processed before it might stop the filter chain processing by not calling FilterChain.doFilter(request, response). For example, a CSRF filter might stop the filter chain processing if the request does not include the CSRF parameter.
What if I want to secure my REST API with a JWT token, which is retrieved from login? I must configure two namespace configuration http tags, right? One for /login with UsernamePasswordAuthenticationFilter, and another one for REST URLs, with a custom JwtAuthenticationFilter.
No, you are not forced to do it this way. You could declare both UsernamePasswordAuthenticationFilter and JwtAuthenticationFilter in the same http element, but it depends on the concrete behaviour of each of these filters. Both approaches are possible, and which one to choose finally depends on your own preferences.
Does configuring two http elements create two springSecurityFitlerChains?
Yes, that's true
Is UsernamePasswordAuthenticationFilter turned off by default, until I declare form-login?
Yes, you can see that in the filters raised in each one of the configs I posted.
How do I replace SecurityContextPersistenceFilter with one, which will obtain Authentication from existing JWT-token rather than JSESSIONID?
You can avoid SecurityContextPersistenceFilter just by configuring the session strategy in the <security:http> element. Just configure it like this:
<security:http create-session="stateless" >
Or, in this case, you could overwrite it with another filter, this way, inside the <security:http> element:
<security:http ...>
<security:custom-filter ref="myCustomFilter" position="SECURITY_CONTEXT_FILTER"/>
</security:http>
<beans:bean id="myCustomFilter" class="com.xyz.myFilter" />
EDIT:
One question about "You could too have more than one Authentication processing mechanisms configured in the same FilterchainProxy". Will the latter overwrite the authentication performed by first one, if declaring multiple (Spring implementation) authentication filters? How this relates to having multiple authentication providers?
This finally depends on the implementation of each filter itself, but it's true that the later authentication filters are at least able to overwrite any prior authentication eventually made by preceding filters.
But this won't necessarily happen. I have some production cases in secured REST services where I use a kind of authorization token which can be provided either as an HTTP header or inside the request body. So I configure two filters which recover that token: in one case from the HTTP header, and in the other from the request body of the REST request itself. It's true that if one HTTP request provides the authentication token both as an HTTP header and inside the request body, both filters will try to execute the authentication mechanism, delegating it to the manager, but this can easily be avoided by simply checking whether the request is already authenticated at the beginning of the doFilter() method of each filter.
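A sketch of that early-exit guard (hypothetical custom filter; only the guard is shown):
// inside the custom filter's doFilter(): skip the authentication attempt
// if an earlier filter already authenticated this request
Authentication existing = SecurityContextHolder.getContext().getAuthentication();
if (existing != null && existing.isAuthenticated()) {
    chain.doFilter(request, response);
    return;
}
// ... otherwise recover the token and delegate to the AuthenticationManager ...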
Having more than one authentication filter is related to having more than one authentication provider, but it doesn't force it. In the case I described before, I have two authentication filters but only one authentication provider, as both filters create the same type of Authentication object, so in both cases the authentication manager delegates to the same provider.
And opposite to this, I also have a scenario where I publish just one UsernamePasswordAuthenticationFilter, but the user credentials can be stored either in a DB or in LDAP, so I have two UsernamePasswordAuthenticationToken-supporting providers, and the AuthenticationManager delegates any authentication attempt from the filter to the providers sequentially to validate the credentials.
So, I think it's clear that neither the number of authentication filters determines the number of authentication providers, nor does the number of providers determine the number of filters.
Also, the documentation states that SecurityContextPersistenceFilter is responsible for clearing the SecurityContext, which is important due to thread pooling. If I omit it or provide a custom implementation, I have to implement the cleaning manually, right? Are there more similar gotchas when customizing the chain?
I had not looked carefully into this filter before, but after your last question I've been checking its implementation, and as usual in Spring, nearly everything can be configured, extended or overridden.
The SecurityContextPersistenceFilter delegates the search for the SecurityContext to a SecurityContextRepository implementation. By default, an HttpSessionSecurityContextRepository is used, but this can be changed using one of the constructors of the filter. So it may be better to write a SecurityContextRepository which fits your needs and just configure it in the SecurityContextPersistenceFilter, trusting its proven behaviour rather than starting from scratch.
The typical command is:
docker container ls -f 'status=exited'
However, this will only list one of the possible non-running statuses. The full set of statuses you can filter on is: created, restarting, running, removing, paused, exited, and dead.
You can filter on multiple statuses by passing multiple filters on the status:
docker container ls -f 'status=exited' -f 'status=dead' -f 'status=created'
If you are integrating this with an automatic cleanup script, you can chain one command to another with some bash syntax, output just the container id's with -q
, and you can also limit to just the containers that exited successfully with an exit code filter:
docker container rm $(docker container ls -q -f 'status=exited' -f 'exited=0')
For more details on filters you can use, see Docker's documentation: https://docs.docker.com/engine/reference/commandline/ps/#filtering
There is no simple way to achieve this directly. But P99 has macros that allow you to create such a function automatically:
P99_DECLARE_ENUM(color, red, green, blue);
in a header file, and
P99_DEFINE_ENUM(color);
in one compilation unit (.c file) should then do the trick; in that example the function would be called color_getname.
With Scanner the default delimiters are the whitespace characters.
But Scanner can define where a token starts and ends based on a set of delimiters, which can be specified in two ways: as a String pattern or as a java.util.regex.Pattern, via the overloaded useDelimiter() methods.
So the useDelimiter() methods are used to tokenize the Scanner input, behaving like the StringTokenizer class. Take a look at these tutorials for further information:
And here is an example:
public static void main(String[] args) {
// Initialize Scanner object
Scanner scan = new Scanner("Anna Mills/Female/18");
// initialize the string delimiter
scan.useDelimiter("/");
// Printing the tokenized Strings
while(scan.hasNext()){
System.out.println(scan.next());
}
// closing the scanner stream
scan.close();
}
Prints this output:
Anna Mills
Female
18
The same thing happened to me as I was installing the HPCC benchmark suite (which includes HPL and a few other benchmarks). I added -lm to the compiler flags in my build script and then it compiled successfully.
Number((6.688689).toFixed(1)); // 6.7
var number = 6.688689;
var roundedNumber = Math.round(number * 10) / 10;
Use the toFixed() function:
(6.688689).toFixed(); // equal to "7"
(6.688689).toFixed(1); // equal to "6.7"
(6.688689).toFixed(2); // equal to "6.69"
For some reason AngularJS keeps confusing me; its documentation is pretty horrible on this. More good examples of variations would be welcome.
Anyway, I have a slight variation on Ben Lesh's answer.
My data collection looks like this:
items =
[
{ key:"AD",value:"Andorra" }
, { key:"AI",value:"Anguilla" }
, { key:"AO",value:"Angola" }
...etc..
]
Now
<select ng-model="countries" ng-options="item.key as item.value for item in items"></select>
still resulted in the option values being the index (0, 1, 2, etc.).
Adding track by fixed it for me:
<select ng-model="blah" ng-options="item.value for item in items track by item.key"></select>
I reckon it happens more often that you want to add an array of objects to a select list, so I am going to remember this one!
Be aware that from AngularJS 1.4 you can't use ng-options any more; you need to use ng-repeat on your option tag:
<select name="test">
<option ng-repeat="item in items" value="{{item.key}}">{{item.value}}</option>
</select>
I make a link. A link is a link. A link navigates to another page. That is what links are for, and everybody understands that. So Method 3 is the only correct method in my book.
I wouldn't want my link to look like a button at all, and when I do, I still think functionality is more important than looks.
Buttons are less accessible, not only due to the need for JavaScript, but also because tools for the visually impaired may not handle such JavaScript-enhanced buttons well.
Method 4 would work as well, but it is more a trick than real functionality. You abuse a form to post 'nothing' to the other page. It's not clean.
It can't take a file handle. The install_requires argument can only be a string or a list of strings.
You can, of course, read your file in the setup script and pass it as a list of strings to install_requires.
from setuptools import setup

with open('requirements.txt') as f:
    required = f.read().splitlines()
setup(...
install_requires=required,
...)
I am not going to use this pattern in my own code since I'm not a big fan of using global variables. However, in a pinch it will work.
User is a promisified Mongoose model.
var globalVar = '';
User.findAsync({}).then(function(users){
globalVar = users;
}).then(function(){
console.log(globalVar);
});
A real working solution with no dependencies other than AngularJS (tested with v1.0.6).
html
<input type="file" name="file" onchange="angular.element(this).scope().uploadFile(this.files)"/>
AngularJS (1.0.6) does not support ng-model on "input-file" tags, so you have to do it in a "native way" that passes all the (eventually) selected files from the user.
controller
$scope.uploadFile = function(files) {
var fd = new FormData();
//Take the first selected file
fd.append("file", files[0]);
$http.post(uploadUrl, fd, {
withCredentials: true,
headers: {'Content-Type': undefined },
transformRequest: angular.identity
}).success( ...all right!... ).error( ..damn!... );
};
The cool parts are the undefined Content-Type and transformRequest: angular.identity, which give $http the ability to choose the right Content-Type and to manage the boundary needed when handling multipart data.
I compared the efficiency of the most popular suggestions to determine if a number is prime. I used python 3.6 on ubuntu 17.10; I tested with numbers up to 100,000 (you can test with bigger numbers using my code below).
This first plot compares the functions (which are explained further down in my answer), showing that the last functions do not grow as fast as the first one when increasing the numbers.
And in the second plot we can see that in case of prime numbers the time grows steadily, but non-prime numbers do not grow so fast in time (because most of them can be eliminated early on).
Here are the functions I used:
This answer and this answer suggested a construct using all():
def is_prime_1(n):
    return n > 1 and all(n % i for i in range(2, int(math.sqrt(n)) + 1))
This answer used some kind of while loop:
def is_prime_2(n):
    if n <= 1:
        return False
    if n == 2:
        return True
    if n == 3:
        return True
    if n % 2 == 0:
        return False
    if n % 3 == 0:
        return False
    i = 5
    w = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += w
        w = 6 - w
    return True
This answer included a version with a for loop:
def is_prime_3(n):
    if n <= 1:
        return False
    if n % 2 == 0 and n > 2:
        return False
    for i in range(3, int(math.sqrt(n)) + 1, 2):
        if n % i == 0:
            return False
    return True
And I mixed a few ideas from the other answers into a new one:
def is_prime_4(n):
    if n <= 1:  # negative numbers, 0 or 1
        return False
    if n <= 3:  # 2 and 3
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    for i in range(5, int(math.sqrt(n)) + 1, 2):
        if n % i == 0:
            return False
    return True
Here is my script to compare the variants:
import math
import pandas as pd
import seaborn as sns
import time
from matplotlib import pyplot as plt

def is_prime_1(n):
    ...
def is_prime_2(n):
    ...
def is_prime_3(n):
    ...
def is_prime_4(n):
    ...

default_func_list = (is_prime_1, is_prime_2, is_prime_3, is_prime_4)

def assert_equal_results(n, func_list=default_func_list):
    for i in range(-2, n):
        r_list = [f(i) for f in func_list]
        if not all(r == r_list[0] for r in r_list):
            print(i, r_list)
            raise ValueError
    print('all functions return the same results for integers up to {}'.format(n))

def compare_functions(n, func_list=default_func_list):
    result_list = []
    n_measurements = 3
    for f in func_list:
        for i in range(1, n + 1):
            ret_list = []
            t_sum = 0
            for _ in range(n_measurements):
                t_start = time.perf_counter()
                is_prime = f(i)
                t_end = time.perf_counter()
                ret_list.append(is_prime)
                t_sum += (t_end - t_start)
            is_prime = ret_list[0]
            assert all(ret == is_prime for ret in ret_list)
            result_list.append((f.__name__, i, is_prime, t_sum / n_measurements))
    df = pd.DataFrame(
        data=result_list,
        columns=['f', 'number', 'is_prime', 't_seconds'])
    df['t_micro_seconds'] = df['t_seconds'].map(lambda x: round(x * 10**6, 2))
    print('df.shape:', df.shape)
    print()
    print('', '-' * 41)
    print('| {:11s} | {:11s} | {:11s} |'.format(
        'is_prime', 'count', 'percent'))
    df_sub1 = df[df['f'] == 'is_prime_1']
    print('| {:11s} | {:11,d} | {:9.1f} % |'.format(
        'all', df_sub1.shape[0], 100))
    for (is_prime, count) in df_sub1['is_prime'].value_counts().iteritems():
        print('| {:11s} | {:11,d} | {:9.1f} % |'.format(
            str(is_prime), count, count * 100 / df_sub1.shape[0]))
    print('', '-' * 41)
    print()
    print('', '-' * 69)
    print('| {:11s} | {:11s} | {:11s} | {:11s} | {:11s} |'.format(
        'f', 'is_prime', 't min (us)', 't mean (us)', 't max (us)'))
    for f, df_sub1 in df.groupby(['f', ]):
        col = df_sub1['t_micro_seconds']
        print('|{0}|{0}|{0}|{0}|{0}|'.format('-' * 13))
        print('| {:11s} | {:11s} | {:11.2f} | {:11.2f} | {:11.2f} |'.format(
            f, 'all', col.min(), col.mean(), col.max()))
        for is_prime, df_sub2 in df_sub1.groupby(['is_prime', ]):
            col = df_sub2['t_micro_seconds']
            print('| {:11s} | {:11s} | {:11.2f} | {:11.2f} | {:11.2f} |'.format(
                f, str(is_prime), col.min(), col.mean(), col.max()))
    print('', '-' * 69)
    return df
Running the function compare_functions(n=10**5) (numbers up to 100,000) I get this output:
df.shape: (400000, 5)
-----------------------------------------
| is_prime | count | percent |
| all | 100,000 | 100.0 % |
| False | 90,408 | 90.4 % |
| True | 9,592 | 9.6 % |
-----------------------------------------
---------------------------------------------------------------------
| f | is_prime | t min (us) | t mean (us) | t max (us) |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_1 | all | 0.57 | 2.50 | 154.35 |
| is_prime_1 | False | 0.57 | 1.52 | 154.35 |
| is_prime_1 | True | 0.89 | 11.66 | 55.54 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_2 | all | 0.24 | 1.14 | 304.82 |
| is_prime_2 | False | 0.24 | 0.56 | 304.82 |
| is_prime_2 | True | 0.25 | 6.67 | 48.49 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_3 | all | 0.20 | 0.95 | 50.99 |
| is_prime_3 | False | 0.20 | 0.60 | 40.62 |
| is_prime_3 | True | 0.58 | 4.22 | 50.99 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_4 | all | 0.20 | 0.89 | 20.09 |
| is_prime_4 | False | 0.21 | 0.53 | 14.63 |
| is_prime_4 | True | 0.20 | 4.27 | 20.09 |
---------------------------------------------------------------------
Then, running the function compare_functions(n=10**6) (numbers up to 1,000,000) I get this output:
df.shape: (4000000, 5)
-----------------------------------------
| is_prime | count | percent |
| all | 1,000,000 | 100.0 % |
| False | 921,502 | 92.2 % |
| True | 78,498 | 7.8 % |
-----------------------------------------
---------------------------------------------------------------------
| f | is_prime | t min (us) | t mean (us) | t max (us) |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_1 | all | 0.51 | 5.39 | 1414.87 |
| is_prime_1 | False | 0.51 | 2.19 | 413.42 |
| is_prime_1 | True | 0.87 | 42.98 | 1414.87 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_2 | all | 0.24 | 2.65 | 612.69 |
| is_prime_2 | False | 0.24 | 0.89 | 322.81 |
| is_prime_2 | True | 0.24 | 23.27 | 612.69 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_3 | all | 0.20 | 1.93 | 67.40 |
| is_prime_3 | False | 0.20 | 0.82 | 61.39 |
| is_prime_3 | True | 0.59 | 14.97 | 67.40 |
|-------------|-------------|-------------|-------------|-------------|
| is_prime_4 | all | 0.18 | 1.88 | 332.13 |
| is_prime_4 | False | 0.20 | 0.74 | 311.94 |
| is_prime_4 | True | 0.18 | 15.23 | 332.13 |
---------------------------------------------------------------------
I used the following script to plot the results:
def plot_1(n, func_list=default_func_list):
    df_orig = compare_functions(func_list=func_list, n=n)
    df_filtered = df_orig[df_orig['t_micro_seconds'] <= 20]
    sns.lmplot(
        data=df_filtered, x='number', y='t_micro_seconds',
        col='f',
        # row='is_prime',
        markers='.',
        ci=None)
    plt.ticklabel_format(style='sci', axis='x', scilimits=(3, 3))
    plt.show()
One doesn't need to give an id to the controller. It can simply be called as follows:
angular.element(document.querySelector('[ng-controller="HeaderCtrl"]')).scope().myFunc()
Here HeaderCtrl is the controller name defined in your JS.
Every time you create plots you might get this error: "Error in plot.new() : figure margins too large". To avoid it, first check the par("mar") output. You should get:
[1] 5.1 4.1 4.1 2.1
To change that, write:
par(mar=c(1,1,1,1))
This should rectify the error. Or else you can change the values accordingly.
Hope this works for you.
Had the same issue; the reason for it was the BCrypt.Net library, compiled against the .NET 2.0 framework, while the whole project that used it was compiled with .NET 4.0. If the symptoms are the same, try downloading the BCrypt source code and rebuilding it in release configuration against .NET 4.0. After I'd done that, the "pre-login handshake" worked fine. Hope it helps anyone.
Enable PAE/NX in the VirtualBox settings for the VM (under System > Processor).
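If you prefer the command line, the same switch can be flipped with VBoxManage while the VM is powered off (replace the VM name with yours):
VBoxManage modifyvm "Your VM Name" --pae on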
#Try this:
input[type="text"],textarea[type="text"]::-webkit-input-placeholder {
color:#f51;
}
input[type="text"],textarea[type="text"]:-moz-placeholder {
color:#f51;
}
input[type="text"],textarea[type="text"]::-moz-placeholder {
color:#f51;
}
input[type="text"],textarea[type="text"]:-ms-input-placeholder {
color:#f51;
}
Works very well for me.
The Mozilla Developer Network provides the following explanation:
event = document.createEvent("KeyboardEvent")
using:
event.initKeyEvent (type, bubbles, cancelable, viewArg,
ctrlKeyArg, altKeyArg, shiftKeyArg, metaKeyArg,
keyCodeArg, charCodeArg)
yourElement.dispatchEvent(event)
I don't see the last one in your code, maybe that's what you're missing. I hope this works in IE as well...
This works for me. Load your properties file from the current directory.
Attention: The method Properties#load uses ISO-8859-1 encoding.
Properties properties = new Properties();
properties.load(new FileReader(new File(".").getCanonicalPath() + File.separator + "java.properties"));
properties.forEach((k, v) -> {
System.out.println(k + " : " + v);
});
Make sure that java.properties is in the current directory. You can just write a little startup script that switches into the right directory beforehand, like:
#! /bin/bash
scriptdir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$scriptdir"
java -jar MyExecutable.jar
cd -
In your project, just put the java.properties file in the project root in order to make this code work from your IDE as well.
I think what you want is to swap in a version of your config file at runtime. If so, create a copy of your config file with the relevant extension (like .Debug or .Release) that has the correct addresses (which gives you a debug version and a release version), and create a post-build step that copies the correct file depending on the build type.
Here is an example of a post-build event that I've used in the past, which overrides the output file with the correct version (debug/release):
copy "$(ProjectDir)ServiceReferences.ClientConfig.$(ConfigurationName)" "$(ProjectDir)ServiceReferences.ClientConfig" /Y
where: $(ProjectDir) is the project directory where the config files are located, and $(ConfigurationName) is the active configuration build type.
EDIT: Please see Marc's answer for a detailed explanation on how to do this programmatically.
postgres
UPDATE table1
SET COLUMN = value
FROM table2,
table3
WHERE table1.column_id = table2.id
AND table1.column_id = table3.id
AND table1.COLUMN = value
AND table2.COLUMN = value
AND table3.COLUMN = value
Let's see an example with a select control bound to $scope.cboPais, with source $scope.geoPaises.
HTML
<select
ng-model="cboPais"
ng-options="item.strPais for item in geoPaises"
></select>
JavaScript
$http.get(strUrl2).success(function (response) {
if (response.length > 0) {
$scope.geoPaises = response; //Data source
nIndex = indexOfUnsortedArray(response, 'iPais', default_values.iPais); //array index of default value, using a custom function to search
if (nIndex >= 0) {
$scope.cboPais = response[nIndex]; //if index of array was found
} else {
$scope.cboPais = response[0]; //select the first element of array
}
$scope.geo_getDepartamentos();
}
});
Here is a useful gist from lfmingo:
OkHttpClient.Builder httpClient = new OkHttpClient.Builder();
httpClient.addInterceptor(new Interceptor() {
@Override
public Response intercept(Interceptor.Chain chain) throws IOException {
Request original = chain.request();
Request request = original.newBuilder()
.header("User-Agent", "Your-App-Name")
.header("Accept", "application/vnd.yourapi.v1.full+json")
.method(original.method(), original.body())
.build();
return chain.proceed(request);
}
});
OkHttpClient client = httpClient.build();
Retrofit retrofit = new Retrofit.Builder()
.baseUrl(API_BASE_URL)
.addConverterFactory(GsonConverterFactory.create())
.client(client)
.build();
For my case adding an empty list did the job.
List<Object> data = new ArrayList<>();
adapter.setData(data);
adapter.notifyDataSetChanged();
This question is now a bit old... but I'll throw my hat into the ring.
I personally find an algorithm of the form find_paths[s, t, d, k] useful, where:
- s is the source (start) vertex
- t is the target (goal) vertex
- d is the maximum depth of a path
- k is the number of paths to find
Using your programming language's form of infinity for d
and k
will give you all paths§.
§ obviously if you are using a directed graph and you want all undirected paths between s
and t
you will have to run this both ways:
find_paths[s, t, d, k] <join> find_paths[t, s, d, k]
I personally like recursion, although it can be difficult at times. Anyway, first let's define our helper function:
from copy import copy  # needed for copying the current path

def find_paths_recursion(graph, current, goal, current_depth, max_depth, num_paths, current_path, paths_found):
    current_path.append(current)
    if current_depth > max_depth:
        current_path.pop()
        return
    if current == goal:
        if len(paths_found) <= num_paths:
            paths_found.append(copy(current_path))
        current_path.pop()
        return
    else:
        for successor in graph[current]:
            find_paths_recursion(graph, successor, goal, current_depth + 1, max_depth, num_paths, current_path, paths_found)
        current_path.pop()
With that out of the way, the core function is trivial:
def find_paths(graph, s, t, d, k):
    paths_found = []  # PASSING THIS BY REFERENCE
    find_paths_recursion(graph, s, t, 0, d, k, [], paths_found)
    return paths_found
First, let's notice a few things:
- [] is an empty list; replace this with the equivalent for your programming language of choice.
- paths_found is passed by reference. It is clear that the recursion function doesn't return anything. Handle this appropriately.
- graph is assuming some form of hashed structure. There are a plethora of ways to implement a graph. Either way, graph[vertex] gets you a list of adjacent vertices in a directed graph - adjust accordingly.
You can also do this:
//Get the current claims principal
var identity = (ClaimsPrincipal)Thread.CurrentPrincipal;
var claims = identity.Claims;
Update
To provide further explanation as per comments.
If you are creating users within your system as follows:
UserManager<ApplicationUser> userManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new SecurityContext()));
ClaimsIdentity identity = userManager.CreateIdentity(user, DefaultAuthenticationTypes.ApplicationCookie);
You should automatically have some Claims populated relating to your Identity.
To add customized claims after a user authenticates you can do this as follows:
var user = userManager.Find(userName, password);
identity.AddClaim(new Claim(ClaimTypes.Email, user.Email));
The claims can be read back out as Darin has answered above or as I have.
The claims are persisted when you call below passing the identity in:
AuthenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = persistCookie }, identity);
HTML:
<div id="my-div" class="hide">Hello, TB3</div>
Javascript:
$(function(){
//If the HIDE class exists then remove it, But first hide DIV
if ( $("#my-div").hasClass( 'hide' ) ) $("#my-div").hide().removeClass('hide');
//Now, you can use any of these functions to display
$("#my-div").show();
//$("#my-div").fadeIn();
//$("#my-div").toggle();
});
I ran into this problem and it seems to be caused (in my case) by clicking an element that pops a div in front of the clicked element. I got around this by wrapping my click in a big ol' try-catch block.
This solution is for those users who use GitHub Desktop.
Rename your repository from Settings on GitHub.com.
Now from your desktop client, click on Sync.
Done.
Today being 2016, I think the cleanest solution is provided by pandas Timestamp:
from datetime import date
import pandas as pd
d = date.today()
pd.Timestamp(d)
Timestamp is the pandas equivalent of datetime and is interchangeable with it in most cases. Check:
from datetime import datetime
isinstance(pd.Timestamp(d), datetime)
But in case you really want a vanilla datetime, you can still do:
pd.Timestamp(d).to_datetime()
Timestamps are a lot more powerful than datetimes, among other things when dealing with timezones. Actually, Timestamps are so powerful that it's a pity they are so poorly documented...
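To illustrate the timezone point, here is a minimal sketch (the zone names are just examples):
import pandas as pd

ts = pd.Timestamp('2016-03-01 12:00')   # naive timestamp
ts_utc = ts.tz_localize('UTC')          # attach a timezone
print(ts_utc.tz_convert('US/Eastern'))  # 2016-03-01 07:00:00-05:00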
I ended up creating my own functions:
@Component({
selector: 'cookie-consent',
template: cookieconsent_html,
styles: [cookieconsent_css]
})
export class CookieConsent {
private isConsented: boolean = false;
constructor() {
this.isConsented = this.getCookie(COOKIE_CONSENT) === '1';
}
private getCookie(name: string) {
let ca: Array<string> = document.cookie.split(';');
let caLen: number = ca.length;
let cookieName = `${name}=`;
let c: string;
for (let i: number = 0; i < caLen; i += 1) {
c = ca[i].replace(/^\s+/g, '');
if (c.indexOf(cookieName) == 0) {
return c.substring(cookieName.length, c.length);
}
}
return '';
}
private deleteCookie(name) {
this.setCookie(name, '', -1);
}
private setCookie(name: string, value: string, expireDays: number, path: string = '') {
let d:Date = new Date();
d.setTime(d.getTime() + expireDays * 24 * 60 * 60 * 1000);
let expires:string = `expires=${d.toUTCString()}`;
let cpath:string = path ? `; path=${path}` : '';
document.cookie = `${name}=${value}; ${expires}${cpath}`;
}
private consent(isConsent: boolean, e: any) {
if (!isConsent) {
return this.isConsented;
} else if (isConsent) {
this.setCookie(COOKIE_CONSENT, '1', COOKIE_CONSENT_EXPIRE_DAYS);
this.isConsented = true;
e.preventDefault();
}
}
}
I had a similar problem. Just to help out someone with the same issue:
My error was the owner attribute of the files in /var/www. After changing them back to the user "www-data", the problem was gone.
The correct way to use forms in Angular 2 now is:
<form (ngSubmit)="onSubmit()">
<label>Username:</label>
<input type="text" class="form-control" [(ngModel)]="user.username" name="username" #username="ngModel" required />
<label>Contraseña:</label>
<input type="password" class="form-control" [(ngModel)]="user.password" name="password" #password="ngModel" required />
<input type="submit" value="Entrar" class="btn btn-primary"/>
</form>
The old way doesn't work anymore.
You can do it in a beginner's style by using control statements and loops:
#include <iostream>
using namespace std;
int main(){
    int arr[] = {10,20,30,40,50}, toFind = 10, notFound = -1;
    for(int i = 0; i < sizeof(arr)/sizeof(arr[0]); i++){
if(arr[i] == toFind){
cout<< "Element is found at " <<i <<" index" <<endl;
return 0;
}
}
cout<<notFound<<endl;
}
C has a boolean type: bool (at least for the last 10(!) years)
Include stdbool.h and true/false will work as expected.
FatJar can help you in this case.
In addition to the"Export as Jar" function which is included to Eclipse the Plug-In bundles all dependent JARs together into one executable jar.
The Plug-In adds the Entry "Build Fat Jar" to the Context-Menu of Java-projects
This is useful if your final exported jar includes other external jars.
If you have Ganymede, the Export Jar dialog is enough to export your resources from your project.
After Ganymede, you have:
I've had this error when there were different RxJS versions across projects. The internal checks in RxJS fail because there are several different Symbol_observable instances. Eventually a function throws once called from a flattening operator like switchMap.
Try importing symbol-observable in some entry point.
// main index.ts
import 'symbol-observable';
This code:
from scipy.stats import linregress
linregress(x,y) #x and y are arrays or lists.
returns a result with the following fields:
slope : float - slope of the regression line
intercept : float - intercept of the regression line
r-value : float - correlation coefficient
p-value : float - two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero
stderr : float - standard error of the estimate
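Since the result unpacks like a tuple, a typical usage sketch (with made-up data) looks like this:
from scipy.stats import linregress

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept, r_value, p_value, std_err = linregress(x, y)
print(slope, intercept)  # fitted line: y = slope * x + intercept
print(r_value ** 2)      # coefficient of determination (r squared)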
Firstly, you are using Capacity
instead of Count
.
Secondly, if you only need to delete one item, then you can happily use a loop. You just need to ensure that you break out of the loop after deleting an item, like so:
int target = 4;
for (int i = 0; i < list.Count; ++i)
{
if (list[i].UniqueID == target)
{
list.RemoveAt(i);
break;
}
}
If you want to remove all items from the list that match an ID, it becomes even easier because you can use List<T>.RemoveAll(Predicate<T> match):
int target = 4;
list.RemoveAll(element => element.UniqueID == target);
You should be able to do this quite easily:
SELECT *
FROM WebPageContent
WHERE data.value('(/PageContent/Text)[1]', 'varchar(100)') LIKE 'XYZ%'
The .value
method gives you the actual value, and you can define that to be returned as a VARCHAR(), which you can then check with a LIKE statement.
Mind you, this isn't going to be awfully fast. So if you have certain fields in your XML that you need to inspect a lot, you could:
With this, you'd basically "extract" a certain portion of the XML into a computed field, make it persisted, and then you can search very efficiently on it (heck: you can even INDEX that field!).
Marc
You can read existing sheets of your interests, for example, 'x1', 'x2', into memory and 'write' them back prior to adding more new sheets (keep in mind that sheets in a file and sheets in memory are two different things, if you don't read them, they will be lost). This approach uses 'xlsxwriter' only, no openpyxl involved.
import pandas as pd
import numpy as np
path = r"C:\Users\fedel\Desktop\excelData\PhD_data.xlsx"
# begin <== read selected sheets and write them back
df1 = pd.read_excel(path, sheet_name='x1', index_col=0) # or sheet_name=0
df2 = pd.read_excel(path, sheet_name='x2', index_col=0) # or sheet_name=1
writer = pd.ExcelWriter(path, engine='xlsxwriter')
df1.to_excel(writer, sheet_name='x1')
df2.to_excel(writer, sheet_name='x2')
# end ==>
# now create more new sheets
x3 = np.random.randn(100, 2)
df3 = pd.DataFrame(x3)
x4 = np.random.randn(100, 2)
df4 = pd.DataFrame(x4)
df3.to_excel(writer, sheet_name='x3')
df4.to_excel(writer, sheet_name='x4')
writer.save()
writer.close()
If you want to preserve all existing sheets, you can replace above code between begin and end with:
# read all existing sheets and write them back
writer = pd.ExcelWriter(path, engine='xlsxwriter')
xlsx = pd.ExcelFile(path)
for sheet in xlsx.sheet_names:
df = xlsx.parse(sheet_name=sheet, index_col=0)
df.to_excel(writer, sheet_name=sheet)
Use \Q
to autoescape any potentially problematic characters in your variable.
print "wee" if $text_to_search =~ m/\Q$search_string/;
There is no .length property in C. Arrays in C do not carry their size around at runtime, so nothing like .length exists; such properties belong to higher-level, object-oriented languages. There, .length returns the total number of elements in an array, while .length - 1 gives the index of the last element, since indexing starts at 0.
I would suggest something like this:
int index;
int jdex;
for( index = 0; index < (sizeof( my_array ) / sizeof( my_array[0] )); index++){
    for( jdex = 0; jdex < (sizeof( my_array[0] ) / sizeof( my_array[0][0] )); jdex++){
        printf( "%d", my_array[index][jdex] );
    }
    printf( "\n" );
}
The expression (sizeof( my_array ) / sizeof( my_array[0] )) gives you the number of elements in the array in question. The sizeof operator returns a size in bytes, so one must divide the total size of the array in bytes by the size of a single element. For example, if each element is an int of 4 bytes and the array is 16 bytes in total, 16/4 yields 4, the total number of elements in the array (the valid indices then run from 0 to 3, since indexing starts at 0 and not 1).
Even though answers to this question are long overdue, I'm still posting a nice solution that I came up with some time ago; it makes it really simple to send complex JSON to ASP.NET MVC controller actions so it is model-bound to whatever strongly-typed parameters you have.
This plugin supports dates just as well, so they get converted to their DateTime
counterpart without a problem.
You can find all the details in my blog post where I examine the problem and provide code necessary to accomplish this.
All you have to do is to use this plugin on the client side. An Ajax request would look like this:
$.ajax({
type: "POST",
url: "SomeURL",
data: $.toDictionary(yourComplexJSONobject),
success: function() { ... },
error: function() { ... }
});
But this is just part of the whole problem. Now we are able to post complex JSON back to the server, but since it will be model-bound to a complex type that may have validation attributes on its properties, things may fail at that point. I've got a solution for that as well: my solution takes advantage of the jQuery Ajax functionality where results can be successful or erroneous (just as shown in the code above). So when validation fails, the error function gets called, as it's supposed to be.
Without you providing any specific error message, it's hard to give specific suggestions.
However, I was using the same example and was getting a permissions exception on File.OpenWrite when using the localFileName variable, because using Path.GetFileName was pointing to a location that obviously would not have permissions for opening a file > C:\ProgramFiles\IIS(Express)\filename.doc
I found that using System.IO.Path.GetFileName is not correct; use System.IO.Path.GetFullPath instead, and point to your file starting with "C:\...".
Also open your solution in FileExplorer and grant permissions to asp.net for the file or any folders holding the file. I was able to download my file at that point.
In my opinion, uniqueidentifier
/ GUID is neither a varchar
nor an nvarchar
but a char(36)
. Therefore I use:
CAST(xyz AS char(36))
Actually
{
@synchronized(self) {
return [[myString retain] autorelease];
}
}
transforms directly into:
// needs #import <objc/objc-sync.h>
{
objc_sync_enter(self)
id retVal = [[myString retain] autorelease];
objc_sync_exit(self);
return retVal;
}
This API has been available since iOS 2.0 and is imported using...
#import <objc/objc-sync.h>
These two ways also work, and will work for other commands that are not using find!
xargs -I '{}' rm '{}'
xargs -i rm '{}'
example use case:
find . -name "*.pyc" | xargs -i rm '{}'
will delete all pyc files under this directory even if the pyc files contain spaces.
You need to add the following line into your Apache config file:
AddType application/x-httpd-php .htm .html
You also need two other things:
Allow Overriding
In your_site.conf
file (e.g. under /etc/apache2/mods-available
in my case), add the following lines:
<Directory "<path_to_your_html_dir(in my case: /var/www/html)>">
AllowOverride All
</Directory>
Enable Rewrite Mod
Run this command on your machine:
sudo a2enmod rewrite
After any of those steps, you should restart apache:
sudo service apache2 restart
For restarting the animation of a gif image, you can use the code:
$('#img_gif').attr('src','file.gif?' + Math.random());
Here is a sample function; df.name = file is the sixth line in the code below:
def df_list():
filename_list = current_stage_files(PATH)
df_list = []
for file in filename_list:
df = pd.read_csv(PATH+file)
df.name = file
df_list.append(df)
return df_list
#include <stdio.h>

struct Bool {
    int true;
    int false;
};

int main() {
    /* bool is a variable of data type struct Bool */
    /* below I'm accessing struct members through the variable bool;
       the members must be initialized at the point of declaration */
    struct Bool bool = {1, 0};
    printf("bool.true is: %d", bool.true);
    return 0;
}
I have used a script to make a join; maybe I can help you:
string Email = String.Join(", ", Emails.Where(i => i.Email != "").Select(i => i.Email).Distinct());
In addition to Charles Bailey's answer, you might want to add the -C
parameter to the commands. Otherwise file renames count as lots of additions and removals (as many as the file has lines), even if the file content was not modified.
To illustrate, here is a commit with lots of files being moved around from one of my projects, when using the git log --oneline --shortstat
command:
9052459 Reorganized project structure
43 files changed, 1049 insertions(+), 1000 deletions(-)
And here the same commit using the git log --oneline --shortstat -C
command which detects file copies and renames:
9052459 Reorganized project structure
27 files changed, 134 insertions(+), 85 deletions(-)
In my opinion the latter gives a more realistic view of how much impact a person has had on the project, because renaming a file is a much smaller operation than writing the file from scratch.
Obviously, packet length * propagation delay = transmission delay is wrong.
Let us assume that you have a packet which has 4 bits, 1010. You have to send it from A to B.
For this scenario, transmission delay is the time taken by the sender to place the packet on the link (the transmission medium), because the bits (1010) have to be converted into signals, which takes some time. Note that here the packet is only being placed; it is not yet moving to the receiver.
Propagation delay is the time taken by a bit (the MSB, here 1) to travel from sender (A) to receiver (B).
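As a quick sanity check, here is a small Python sketch with assumed numbers (the link rate, distance and signal speed are made up for illustration): transmission delay = packet length / bandwidth, while propagation delay = distance / signal speed.
packet_bits = 4          # the 4-bit packet 1010 from the example
bandwidth_bps = 1e6      # assumed link rate: 1 Mbit/s
distance_m = 2e6         # assumed link length: 2,000 km
signal_speed = 2e8       # assumed signal speed in the medium (m/s)

transmission_delay = packet_bits / bandwidth_bps  # time to place all bits on the link
propagation_delay = distance_m / signal_speed     # time for one bit to travel from A to B
print(transmission_delay, propagation_delay)      # 4e-06 s vs 0.01 s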
As @Peter Cullen's answer mentions, your script will hit the browser timeout first. So it's a good idea to provide some log output and then flush(), but the connection has a buffer and you'll not see anything unless a lot of output is provided. Here is a code snippet that helps provide reliable log output:
set_time_limit(0);
...
print "log message";
print "<!--"; print str_repeat (' ', 4000); print "-->"; flush();
print "log message";
print "<!--"; print str_repeat (' ', 4000); print "-->"; flush();
The API documentation http://api.jquery.com/jQuery.each/ confirms:
We can break the $.each() loop at a particular iteration by making the callback function return false. Returning non-false is the same as a continue statement in a for loop; it will skip immediately to the next iteration.
and this is my example http://jsfiddle.net/r6jqP/
(function($){
$('#go').on('click',function(){
var i=0,
all=0;
$('li').each(function(){
all++;
if($('#mytext').val()=='continue')return true;
i++;
if($('#mytext').val()==$(this).html()){
return false;
}
});
alert('Iterazione : '+i+' to '+all);
});
}(jQuery));
The COLLATE keyword specifies what kind of character set and rules (ordering, comparison rules) you are using for string values.
For example, in your case you are using Latin rules, case insensitive (CI) and accent sensitive (AS).
You can refer to this documentation.
Select both the column you are sorting, and the column that the picture is in (I am assuming the picture is small compared to the cell, i.e. it is "in" the cell). Make sure that the object positioning property is set as "move but don't size with cells". Now if you do a sort, the pictures will move with the list being sorted.
Note - you must include the column with the picture in your range when you sort, and the picture must fit inside the cell.
The following VBA snippet will make sure all pictures in your spreadsheet have their "move and size" property set:
Sub moveAndSize()
Dim s As Shape
For Each s In ActiveSheet.Shapes
If s.Type = msoPicture Or s.Type = msoLinkedPicture Or s.Type = msoPlaceholder Then
s.Placement = xlMove
End If
Next
End Sub
If you want to make sure the picture continues to fit after you move it, you can use xlMoveAndSize
instead of xlMove
.
The best way I can think of.
$('#selectAll').change(function () {
$('.reportCheckbox').prop('checked', this.checked);
});
or
$checkBoxes = $(".checkBoxes");
$("#checkAll").change(function (e) {
$checkBoxes.prop("checked", this.checked);
});
or
<input onchange="toggleAll(this)">
function toggleAll(sender) {
$(".checkBoxes").prop("checked", sender.checked);
}
If you are thinking of using the TensorFlow C++ API in a standalone package, you will probably need tensorflow_cc.so (there is also a C API version, tensorflow.so). To build the C++ version you can use:
bazel build -c opt //tensorflow:libtensorflow_cc.so
Note1: If you want to add intrinsics support you can add these flags: --copt=-msse4.2 --copt=-mavx
Note2: If you are thinking of using OpenCV in your project as well, there is an issue when using both libs together (tensorflow issue) and you should use --config=monolithic.
After building the library you need to add it to your project. To do that you can include these paths:
tensorflow
tensorflow/bazel-tensorflow/external/eigen_archive
tensorflow/bazel-tensorflow/external/protobuf_archive/src
tensorflow/bazel-genfiles
And link the library to your project:
tensorflow/bazel-bin/tensorflow/libtensorflow_framework.so (unused if you build with --config=monolithic)
tensorflow/bazel-bin/tensorflow/libtensorflow_cc.so
And when you are building your project you should also specify to your compiler that you are going to use the C++11 standard.
Side note: paths are relative to TensorFlow version 1.5 (you may need to check if anything changed in your version).
Also, this link helped me a lot in finding all this info: link
From PEP 8 - Style Guide for Python Code:
The preferred way of wrapping long lines is by using Python's implied line continuation inside parentheses, brackets and braces. If necessary, you can add an extra pair of parentheses around an expression, but sometimes using a backslash looks better. Make sure to indent the continued line appropriately.
Example of implicit line continuation:
a = some_function(
    1 + 2 + 3 - 4)
On the topic of line-breaks around a binary operator, it goes on to say:
For decades the recommended style was to break after binary operators. But this can hurt readability in two ways: the operators tend to get scattered across different columns on the screen, and each operator is moved away from its operand and onto the previous line.
In Python code, it is permissible to break before or after a binary operator, as long as the convention is consistent locally. For new code Knuth's style (line breaks before the operator) is suggested.
Example of explicit line continuation:
a = 1 \
    + 2 \
    + 3 \
    - 4
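For completeness, here is a small sketch combining both recommendations, implicit continuation inside parentheses with Knuth-style breaks before the operators (the variable names are made up):
gross_wages = 1000
taxable_interest = 50
student_loan_interest = 30

income = (gross_wages
          + taxable_interest
          - student_loan_interest)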
#include <string>
using std::string;
void myReplace(string& str,
const string& oldStr,
const string& newStr) {
if (oldStr.empty()) {
return;
}
for (size_t pos = 0; (pos = str.find(oldStr, pos)) != string::npos;) {
str.replace(pos, oldStr.length(), newStr);
pos += newStr.length();
}
}
The check for oldStr being empty is important. If for whatever reason that parameter is empty you will get stuck in an infinite loop.
But yeah use the tried and tested C++11 or Boost solution if you can.
As said before, VBA does not support multithreading.
But you don't need to use C# or VBScript to start other VBA worker threads.
I use VBA to create VBA worker threads.
First, copy the macro workbook for every thread you want to start.
Then you can start new Excel instances (running in another thread) simply by creating an instance of Excel.Application (to avoid errors I have to set the new application to visible).
To actually run some task in another thread, I can then start a macro in the other application with parameters from the master workbook.
To return to the master workbook thread without waiting, I simply use Application.OnTime in the worker thread (where I need it).
As a semaphore I simply use a collection that is shared with all threads. For callbacks, pass the master workbook to the worker thread. There the runMakroInOtherInstance function can be reused to start a callback.
'Create new thread and return reference to workbook of worker thread
Public Function openNewInstance(ByVal fileName As String, Optional ByVal openVisible As Boolean = True) As Workbook
Dim newApp As New Excel.Application
ThisWorkbook.SaveCopyAs ThisWorkbook.Path & "\" & fileName
If openVisible Then newApp.Visible = True
Set openNewInstance = newApp.Workbooks.Open(ThisWorkbook.Path & "\" & fileName, False, False)
End Function
'Start macro in other instance and wait for return (OnTime used in target macro)
Public Sub runMakroInOtherInstance(ByRef otherWkb As Workbook, ByVal strMakro As String, ParamArray var() As Variant)
Dim makroName As String
makroName = "'" & otherWkb.Name & "'!" & strMakro
Select Case UBound(var)
Case -1:
otherWkb.Application.Run makroName
Case 0:
otherWkb.Application.Run makroName, var(0)
Case 1:
otherWkb.Application.Run makroName, var(0), var(1)
Case 2:
otherWkb.Application.Run makroName, var(0), var(1), var(2)
Case 3:
otherWkb.Application.Run makroName, var(0), var(1), var(2), var(3)
Case 4:
otherWkb.Application.Run makroName, var(0), var(1), var(2), var(3), var(4)
Case 5:
otherWkb.Application.Run makroName, var(0), var(1), var(2), var(3), var(4), var(5)
End Select
End Sub
Public Sub SYNCH_OR_WAIT()
On Error Resume Next
While masterBlocked.Count > 0
DoEvents
Wend
masterBlocked.Add "BLOCKED", ThisWorkbook.FullName
End Sub
Public Sub SYNCH_RELEASE()
On Error Resume Next
masterBlocked.Remove ThisWorkbook.FullName
End Sub
Sub runTaskParallel()
...
Dim controllerWkb As Workbook
Set controllerWkb = openNewInstance("controller.xlsm")
runMakroInOtherInstance controllerWkb, "CONTROLLER_LIST_FILES", ThisWorkbook, rootFold, masterBlocked
...
End Sub
In order to run natively, you will likely need to use Cygwin (which I cannot live without when using Windows). So right off the bat, +1 for Cygwin. Anything else would be uncivilized.
HOWEVER, that being said, I have recently begun using a combination of utilities to easily PORT Bash scripts to Windows so that my anti-Linux coworkers can easily run complex tasks that are better handled by GNU utilities.
I can usually port a Bash script to Batch in a very short time by opening the original script in one pane and writing a Batch file in the other pane. The tools that I use are as follows:
I prefer UnxUtils to GnuWin32 because of the fact that [someone please correct me if I'm wrong] GnuWin utils normally have to be installed, whereas UnxUtils are standalone binaries that just work out-of-the-box.
However, the CoreUtils do not include some familiar *NIX utilities such as cURL, which is also available for Windows (curl.haxx.se/download.html).
I create a folder for the projects, and always SET PATH=. in the .bat file so that no other commands other than the basic CMD shell commands are referenced (as well as the particular UnxUtils required in the project folder for the Batch script to function as expected).
Then I copy the needed CoreUtils .exe files into the project folder and reference them in the .bat file such as ".\curl.exe -s google.com", etc.
The Bat2Exe program is where the magic happens. Once your Batch file is complete and has been tested successfully, launch Bat2Exe.exe, and specify the path to the project folder. Bat2Exe will then create a Windows binary containing all of the files in that specific folder, and will use the first .bat that it comes across to use as the main executable. You can even include a .ico file to use as the icon for the final .exe file that is generated.
I have tried a few of these type of programs, and many of the generated binaries get flagged as malware, but the Bat2Exe version that I referenced works perfectly and the generated .exe files scan completely clean.
The resulting executable can be run interactively by double-clicking, or run from the command line with parameters, etc., just like a regular Batch file, except you will be able to utilize the functionality of many of the tools that you will normally use in Bash.
I realize this is getting quite long, but if I may digress a bit, I have also written a Batch script that I call PortaBashy that my coworkers can launch from a network share that contains a portable Cygwin installation. It then sets the %PATH% variable to the normal *NIX format (/usr/bin:/usr/sbin:/bin:/sbin), etc. and can either launch into the Bash shell itself or launch the more-powerful and pretty MinTTY terminal emulator.
There are always numerous ways to accomplish what you are trying to set out to do; it's just a matter of combining the right tools for the job, and many times it boils down to personal preference.
jQuery:
$("#WhereYouWantTheImageToAppear").html('<img src="./Animated.gif" />');
Generics only support object types, not primitives. Unlike C++ templates, generics don't involve code generation, and there is only one HashMap class regardless of the number of generic types you use it with.
Trove4J gets around this by pre-generating selected collections to use primitives, and supports TCharIntHashMap, which you can wrap to support the Map<Character, Integer> interface if you need to.
TCharIntHashMap: An open addressed Map implementation for char keys and int values.
From iOS 8 we can use the new isOperatingSystemAtLeastVersion
method on NSProcessInfo
NSOperatingSystemVersion ios8_0_1 = (NSOperatingSystemVersion){8, 0, 1};
if ([[NSProcessInfo processInfo] isOperatingSystemAtLeastVersion:ios8_0_1]) {
// iOS 8.0.1 and above logic
} else {
// iOS 8.0.0 and below logic
}
Beware that this will crash on iOS 7, as the API didn't exist prior to iOS 8. If you're supporting iOS 7 and below, you can safely perform the check with
if ([NSProcessInfo instancesRespondToSelector:@selector(isOperatingSystemAtLeastVersion:)]) {
// conditionally check for any version >= iOS 8 using 'isOperatingSystemAtLeastVersion'
} else {
// we're on iOS 7 or below
}
For the sake of completeness, here's an alternative approach proposed by Apple itself in the iOS 7 UI Transition Guide, which involves checking the Foundation Framework version.
if (floor(NSFoundationVersionNumber) <= NSFoundationVersionNumber_iOS_6_1) {
// Load resources for iOS 6.1 or earlier
} else {
// Load resources for iOS 7 or later
}
I had the same problem, i.e. "A network-related or instance-specific error occurred while establishing a connection to SQL Server."
I was using SQL Server 2005 (.\sqlexpress) and it worked fine, but suddenly the services stopped and gave me this error.
I solved it like this:
Start -> Search Box -> SQL Configuration Manager -> SQL Server 2005 Services -> and just change your SQL Server (SQLEXPRESS) state to Running by right-clicking that service's state.
Even simpler:
body: new URLSearchParams({
'userName': '[email protected]',
'password': 'Password!',
'grant_type': 'password'
}),
Docs: https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch
First of all, scipy.misc.imread and PIL are no longer available. Instead, use the imageio library, but you need to install Pillow as a dependency:
pip install Pillow imageio
Then use the following code to load the image and get the details about it.
import imageio
import tensorflow as tf
path = 'your_path_to_image' # '~/Downloads/image.png'
img = imageio.imread(path)
print(img.shape)
or
img_tf = tf.Variable(img)
print(img_tf.get_shape().as_list())
both work fine.
You have a typo in the import in your LoginComponent
's file
import { Component } from '@angular/Core';
It's lowercase c
, not uppercase
import { Component } from '@angular/core';
Maybe
if (inventory.contains("bread") && !inventory.contains("water"))
Or
if (inventory.contains("bread")) {
if (!inventory.contains("water")) {
// do something here
}
}
System.exit(0): by convention, a zero status code indicates successful termination.
System.exit(1): it means termination was unsuccessful due to some error.
While reading from the file, you are not terminating your loop, so it reads all the values and then correctly throws EOFException on the next iteration of the read at the line below:
price = in.readDouble();
If you read the documentation, it says:
Throws:
EOFException - if this input stream reaches the end before reading eight bytes.
IOException - the stream has been closed and the contained input stream does not support reading after close, or another I/O error occurs.
Put a proper termination condition in your while loop to resolve the issue e.g. below:
while(in.available() > 0) // <-- if there are still bytes to read
The specific Date
constructor is deprecated, and Calendar
should be used instead.
The JavaDoc
for Date describes which constructors are deprecated and how to replace them using a Calendar
.
This way you always know what the size is, because a specific type is dedicated to sizes. The question itself shows that it can be an issue: is it an int
or an unsigned int
? Also, what is the magnitude (short
, int
, long
, etc.)?
Because there is a specific type assigned, you don't have to worry about the length or the signed-ness.
The actual definition can be found in the C++ Reference Library, which says:
Type:
size_t
(Unsigned integral type)Header:
<cstring>
size_t
corresponds to the integral data type returned by the language operatorsizeof
and is defined in the<cstring>
header file (among others) as an unsigned integral type.In
<cstring>
, it is used as the type of the parameternum
in the functionsmemchr
,memcmp
,memcpy
,memmove
,memset
,strncat
,strncmp
,strncpy
andstrxfrm
, which in all cases it is used to specify the maximum number of bytes or characters the function has to affect.It is also used as the return type for
strcspn
,strlen
,strspn
andstrxfrm
to return sizes and lengths.
If you set CURLINFO_HEADER_OUT
to true
, outgoing headers are available in the array returned by curl_getinfo()
, under request_header
key:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://foo.com/bar");
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
curl_setopt($ch, CURLOPT_USERPWD, "someusername:secretpassword");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLINFO_HEADER_OUT, true);
curl_exec($ch);
$info = curl_getinfo($ch);
print_r($info['request_header']);
This will print:
GET /bar HTTP/1.1
Authorization: Basic c29tZXVzZXJuYW1lOnNlY3JldHBhc3N3b3Jk
Host: foo.com
Accept: */*
Note the auth details are base64-encoded:
echo base64_decode('c29tZXVzZXJuYW1lOnNlY3JldHBhc3N3b3Jk');
// prints: someusername:secretpassword
Also note that username and password need to be percent-encoded to escape any URL reserved characters (/
, ?
, &
, :
and so on) they might contain:
curl_setopt($ch, CURLOPT_USERPWD, urlencode($username).':'.urlencode($password));
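The same percent-encoding idea sketched in Python for comparison, where urllib.parse.quote plays the role of PHP's urlencode (the credentials are made up):
from urllib.parse import quote

username = 'some/user:name'
password = 'p?a&ss'
userpwd = quote(username, safe='') + ':' + quote(password, safe='')
print(userpwd)  # some%2Fuser%3Aname:p%3Fa%26ss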
This answer https://stackoverflow.com/a/33149413/6481542 got me 90% of the way with uploading large files to a development Django server, but I had to use setFixedLengthStreamingMode to make it work. That requires setting the Content-Length before writing the content, thus requiring a fairly significant rewrite of the above answer. Here's my end result:
public class MultipartLargeUtility {
private final String boundary;
private static final String LINE_FEED = "\r\n";
private HttpURLConnection httpConn;
private String charset;
private OutputStream outputStream;
private PrintWriter writer;
private static final String TAG = "MultipartLargeUtility";
private final int maxBufferSize = 4096;
private long contentLength = 0;
private URL url;
private List<FormField> fields;
private List<FilePart> files;
private class FormField {
public String name;
public String value;
public FormField(String name, String value) {
this.name = name;
this.value = value;
}
}
private class FilePart {
public String fieldName;
public File uploadFile;
public FilePart(String fieldName, File uploadFile) {
this.fieldName = fieldName;
this.uploadFile = uploadFile;
}
}
/**
* This constructor initializes a new HTTP POST request with content type
* is set to multipart/form-data
*
* @param requestURL
* @param charset
* @throws IOException
*/
public MultipartLargeUtility(String requestURL, String charset, boolean requireCSRF)
throws IOException {
this.charset = charset;
// creates a unique boundary based on time stamp
boundary = "===" + System.currentTimeMillis() + "===";
url = new URL(requestURL);
fields = new ArrayList<>();
files = new ArrayList<>();
if (requireCSRF) {
getCSRF();
}
}
/**
* Adds a form field to the request
*
* @param name field name
* @param value field value
*/
public void addFormField(String name, String value)
throws UnsupportedEncodingException {
String fieldContent = "--" + boundary + LINE_FEED;
fieldContent += "Content-Disposition: form-data; name=\"" + name + "\"" + LINE_FEED;
fieldContent += "Content-Type: text/plain; charset=" + charset + LINE_FEED;
fieldContent += LINE_FEED;
fieldContent += value + LINE_FEED;
contentLength += fieldContent.getBytes(charset).length;
fields.add(new FormField(name, value));
}
/**
* Adds a upload file section to the request
*
* @param fieldName name attribute in <input type="file" name="..." />
* @param uploadFile a File to be uploaded
* @throws IOException
*/
public void addFilePart(String fieldName, File uploadFile)
throws IOException {
String fileName = uploadFile.getName();
String fieldContent = "--" + boundary + LINE_FEED;
fieldContent += "Content-Disposition: form-data; name=\"" + fieldName
+ "\"; filename=\"" + fileName + "\"" + LINE_FEED;
fieldContent += "Content-Type: "
+ URLConnection.guessContentTypeFromName(fileName) + LINE_FEED;
fieldContent += "Content-Transfer-Encoding: binary" + LINE_FEED;
fieldContent += LINE_FEED;
// file content would go here
fieldContent += LINE_FEED;
contentLength += fieldContent.getBytes(charset).length;
contentLength += uploadFile.length();
files.add(new FilePart(fieldName, uploadFile));
}
/**
* Adds a header field to the request.
*
* @param name - name of the header field
* @param value - value of the header field
*/
//public void addHeaderField(String name, String value) {
// writer.append(name + ": " + value).append(LINE_FEED);
// writer.flush();
//}
/**
* Completes the request and receives response from the server.
*
* @return a list of Strings as response in case the server returned
* status OK, otherwise an exception is thrown.
* @throws IOException
*/
public List<String> finish() throws IOException {
List<String> response = new ArrayList<String>();
String content = "--" + boundary + "--" + LINE_FEED;
contentLength += content.getBytes(charset).length;
if (!openConnection()) {
return response;
}
writeContent();
// checks server's status code first
int status = httpConn.getResponseCode();
if (status == HttpURLConnection.HTTP_OK) {
BufferedReader reader = new BufferedReader(new InputStreamReader(
httpConn.getInputStream()));
String line = null;
while ((line = reader.readLine()) != null) {
response.add(line);
}
reader.close();
httpConn.disconnect();
} else {
throw new IOException("Server returned non-OK status: " + status);
}
return response;
}
private boolean getCSRF()
throws IOException {
/// First, need to get CSRF token from server
/// Use GET request to get the token
CookieManager cookieManager = new CookieManager();
CookieHandler.setDefault(cookieManager);
HttpURLConnection conn = null;
conn = (HttpURLConnection) url.openConnection();
conn.setUseCaches(false); // Don't use a Cached Copy
conn.setRequestMethod("GET");
conn.setRequestProperty("Connection", "Keep-Alive");
conn.getContent();
conn.disconnect();
/// parse the returned object for the CSRF token
CookieStore cookieJar = cookieManager.getCookieStore();
List<HttpCookie> cookies = cookieJar.getCookies();
String csrf = null;
for (HttpCookie cookie : cookies) {
Log.d("cookie", "" + cookie);
if (cookie.getName().equals("csrftoken")) {
csrf = cookie.getValue();
break;
}
}
if (csrf == null) {
Log.d(TAG, "Unable to get CSRF");
return false;
}
Log.d(TAG, "Received cookie: " + csrf);
addFormField("csrfmiddlewaretoken", csrf);
return true;
}
private boolean openConnection()
throws IOException {
httpConn = (HttpURLConnection) url.openConnection();
httpConn.setUseCaches(false);
httpConn.setDoOutput(true); // indicates POST method
httpConn.setDoInput(true);
//httpConn.setRequestProperty("Accept-Encoding", "identity");
httpConn.setFixedLengthStreamingMode(contentLength);
httpConn.setRequestProperty("Connection", "Keep-Alive");
httpConn.setRequestProperty("Content-Type",
"multipart/form-data; boundary=" + boundary);
outputStream = new BufferedOutputStream(httpConn.getOutputStream());
writer = new PrintWriter(new OutputStreamWriter(outputStream, charset),
true);
return true;
}
private void writeContent()
throws IOException {
for (FormField field : fields) {
writer.append("--" + boundary).append(LINE_FEED);
writer.append("Content-Disposition: form-data; name=\"" + field.name + "\"")
.append(LINE_FEED);
writer.append("Content-Type: text/plain; charset=" + charset).append(
LINE_FEED);
writer.append(LINE_FEED);
writer.append(field.value).append(LINE_FEED);
writer.flush();
}
for (FilePart filePart : files) {
String fileName = filePart.uploadFile.getName();
writer.append("--" + boundary).append(LINE_FEED);
writer.append(
"Content-Disposition: form-data; name=\"" + filePart.fieldName
+ "\"; filename=\"" + fileName + "\"")
.append(LINE_FEED);
writer.append(
"Content-Type: "
+ URLConnection.guessContentTypeFromName(fileName))
.append(LINE_FEED);
writer.append("Content-Transfer-Encoding: binary").append(LINE_FEED);
writer.append(LINE_FEED);
writer.flush();
FileInputStream inputStream = new FileInputStream(filePart.uploadFile);
int bufferSize = Math.min(inputStream.available(), maxBufferSize);
byte[] buffer = new byte[bufferSize];
int bytesRead = -1;
while ((bytesRead = inputStream.read(buffer, 0, bufferSize)) != -1) {
outputStream.write(buffer, 0, bytesRead);
}
outputStream.flush();
inputStream.close();
writer.append(LINE_FEED);
writer.flush();
}
writer.append("--" + boundary + "--").append(LINE_FEED);
writer.close();
}
}
Usage is largely the same as in the above answer, but I've included the CSRF support that Django uses by default with forms:
boolean useCSRF = true;
MultipartLargeUtility multipart = new MultipartLargeUtility(url, "UTF-8",useCSRF);
multipart.addFormField("param1","value");
multipart.addFilePart("filefield",new File("/path/to/file"));
List<String> response = multipart.finish();
Log.w(TAG,"SERVER REPLIED:");
for(String line : response) {
Log.w(TAG, "Upload Files Response:::" + line);
}
This should do it:
[^\x00-\x7F]+
It matches any character which is not contained in the ASCII character set (0-127, i.e. 0x0 to 0x7F).
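A quick Python sketch to see the pattern in action (the sample string is made up):
import re

text = 'abc déf ☃'
print(re.findall(r'[^\x00-\x7F]+', text))  # ['é', '☃'] - the non-ASCII runs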
You can do the same thing with Unicode:
[^\u0000-\u007F]+
For Unicode you can look at these two resources:
This uses two commands: git log --pretty=email to export each file's history as patches, and git am to apply them.
Example: Extract history of file3, file4 and file5
my_repo
+-- dirA
¦ +-- file1
¦ +-- file2
+-- dirB ^
¦ +-- subdir | To be moved
¦ ¦ +-- file3 | with history
¦ ¦ +-- file4 |
¦ +-- file5 v
+-- dirC
+-- file6
+-- file7
Set/clean the destination
export historydir=/tmp/mail/dir # Absolute path
rm -rf "$historydir" # Caution when cleaning the folder
Extract history of each file in email format
cd my_repo/dirB
find -name .git -prune -o -type d -o -exec bash -c 'mkdir -p "$historydir/${0%/*}" && git log --pretty=email -p --stat --reverse --full-index --binary -- "$0" > "$historydir/$0"' {} ';'
Unfortunately, the options --follow or --find-copies-harder cannot be combined with --reverse. This is why history is cut when a file is renamed (or when a parent directory is renamed).
Temporary history in email format:
/tmp/mail/dir
+-- subdir
¦ +-- file3
¦ +-- file4
+-- file5
Dan Bonachea suggests inverting the loops of the git log generation command in this first step: rather than running git log once per file, run it exactly once with a list of files on the command line and generate a single unified log. This way commits that modify multiple files remain a single commit in the result, and all the new commits maintain their original relative order. Note this also requires changes in the second step below when rewriting filenames in the (now unified) log.
Suppose you want to move these three files in this other repo (can be the same repo).
my_other_repo
+-- dirF
¦ +-- file55
¦ +-- file56
+-- dirB # New tree
¦ +-- dirB1 # from subdir
¦ ¦ +-- file33 # from file3
¦ ¦ +-- file44 # from file4
¦ +-- dirB2 # new dir
¦ +-- file5 # from file5
+-- dirH
+-- file77
Therefore reorganize your files:
cd /tmp/mail/dir
mkdir -p dirB/dirB1
mv subdir/file3 dirB/dirB1/file33
mv subdir/file4 dirB/dirB1/file44
mkdir -p dirB/dirB2
mv file5 dirB/dirB2
Your temporary history is now:
/tmp/mail/dir
+-- dirB
+-- dirB1
¦ +-- file33
¦ +-- file44
+-- dirB2
+-- file5
Also change the filenames within the history:
cd "$historydir"
find * -type f -exec bash -c 'sed "/^diff --git a\|^--- a\|^+++ b/s:\( [ab]\)/[^ ]*:\1/$0:g" -i "$0"' {} ';'
Your other repo is:
my_other_repo
+-- dirF
¦ +-- file55
¦ +-- file56
+-- dirH
+-- file77
Apply commits from temporary history files:
cd my_other_repo
find "$historydir" -type f -exec cat {} + | git am --committer-date-is-author-date
--committer-date-is-author-date
preserves the original commit time-stamps (Dan Bonachea's comment).
Your other repo is now:
my_other_repo
+-- dirF
¦ +-- file55
¦ +-- file56
+-- dirB
¦ +-- dirB1
¦ ¦ +-- file33
¦ ¦ +-- file44
¦ +-- dirB2
¦ +-- file5
+-- dirH
+-- file77
Use git status to see the number of commits ready to be pushed :-)
To list the files that have been renamed:
find -name .git -prune -o -exec git log --pretty=tformat:'' --numstat --follow {} ';' | grep '=>'
More customizations: You can complete the command git log
using options --find-copies-harder
or --reverse
. You can also remove the first two columns using cut -f3-
and grepping complete pattern '{.* => .*}'
.
find -name .git -prune -o -exec git log --pretty=tformat:'' --numstat --follow --find-copies-harder --reverse {} ';' | cut -f3- | grep '{.* => .*}'
If you want results that do not contain any of a-z or A-Z:
SELECT * FROM mytable WHERE NOT REGEXP_LIKE(column_1, '[A-Za-z]')
something like:
"98763045098" or "!%436%$7%$*#"
or text in other languages like Persian, Arabic and so on, like this:
"???? ????"
Here is another example showing the use of gather
from tidyr
. You can select the columns to gather
either by removing them individually (as I do here), or by including the years you want explicitly.
Note that, to handle the commas (and X's added if check.names = FALSE
is not set), I am also using dplyr
's mutate with parse_number
from readr
to convert the text values back to numbers. These are all part of the tidyverse
and so can be loaded together with library(tidyverse):
wide %>%
gather(Year, Value, -Code, -Country) %>%
mutate(Year = parse_number(Year)
, Value = parse_number(Value))
Returns:
Code Country Year Value
1 AFG Afghanistan 1950 20249
2 ALB Albania 1950 8097
3 AFG Afghanistan 1951 21352
4 ALB Albania 1951 8986
5 AFG Afghanistan 1952 22532
6 ALB Albania 1952 10058
7 AFG Afghanistan 1953 23557
8 ALB Albania 1953 11123
9 AFG Afghanistan 1954 24555
10 ALB Albania 1954 12246
I think atompub REST API is a great example of a restful service. See the snippet below from the atompub spec:
POST /edit/ HTTP/1.1
Host: example.org
User-Agent: Thingio/1.0
Authorization: Basic ZGFmZnk6c2VjZXJldA==
Content-Type: application/atom+xml;type=entry
Content-Length: nnn
Slug: First Post
<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
<title>Atom-Powered Robots Run Amok</title>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<author><name>John Doe</name></author>
<content>Some text.</content>
</entry>
The server signals a successful creation with a status code of 201. The response includes a Location header indicating the Member Entry URI of the Atom Entry, and a representation of that Entry in the body of the response.
HTTP/1.1 201 Created
Date: Fri, 7 Oct 2005 17:17:11 GMT
Content-Length: nnn
Content-Type: application/atom+xml;type=entry;charset="utf-8"
Location: http://example.org/edit/first-post.atom
ETag: "c180de84f991g8"
<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
<title>Atom-Powered Robots Run Amok</title>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2003-12-13T18:30:02Z</updated>
<author><name>John Doe</name></author>
<content>Some text.</content>
<link rel="edit"
href="http://example.org/edit/first-post.atom"/>
</entry>
The Entry created and returned by the Collection might not match the Entry POSTed by the client. A server MAY change the values of various elements in the Entry, such as the atom:id, atom:updated, and atom:author values, and MAY choose to remove or add other elements and attributes, or change element content and attribute values.
UPDATE: As of docker-compose file version 3.5:
I came across a similar problem and I solved it by adding a small change in one of my docker-compose.yml files.
For instance, we have two APIs, scoring and ner. The scoring API needs to send a request to the ner API to process the input request. In order to do that, they both are supposed to share the same network.
Note: Every container has its own network, which is automatically created at the time of running the app inside Docker. For example, the ner API network will be created as ner_default and the scoring API network will be named scoring_default. This solution will work for version: '3'.
As in the above scenario, my scoring API wants to communicate with the ner API, so I will add the following lines to the ner API's compose file. This means that whenever the container for the ner API is created, it is automatically added to the scoring_default network.
networks:
default:
external:
name: scoring_default
version: '3'
services:
ner:
container_name: "ner_api"
build: .
...
networks:
default:
external:
name: scoring_default
version: '3'
services:
api:
build: .
...
We can see how the above containers are now part of the same network called scoring_default using the command:
docker inspect scoring_default
{
"Name": "scoring_default",
....
"Containers": {
"14a6...28bf": {
"Name": "ner_api",
"EndpointID": "83b7...d6291",
"MacAddress": "0....",
"IPv4Address": "0.0....",
"IPv6Address": ""
},
"7b32...90d1": {
"Name": "scoring_api",
"EndpointID": "311...280d",
"MacAddress": "0.....3",
"IPv4Address": "1...0",
"IPv6Address": ""
},
...
}
Your application of JS and PHP is totally invalid.
You have to understand the fact that JS runs on the client side; once the page loads, it does not care whether the page was PHP, JSP or ASP. It executes against the DOM and is related to it only.
However, you can do something like this:
var newLocation = "<?php echo $newlocation; ?>";
window.location = newLocation;
You see, by the time the script is loaded, the above code renders into a different form, something like this:
var newLocation = "your/redirecting/page.php";
window.location = newLocation;
Like the above, there are many possibilities for PHP and JS fusion, but the one you are doing is not one of them.
Do not forget to include a reference to the Microsoft Excel 12.0/11.0 Object Library.
using Excel = Microsoft.Office.Interop.Excel;
// Include this Namespace
Microsoft.Office.Interop.Excel.Application xlApp = null;
Excel.Workbook xlWorkbook = null;
Excel.Sheets xlSheets = null;
Excel.Worksheet xlNewSheet = null;
string worksheetName ="Sheet_Name";
object readOnly1 = false;
object isVisible = true;
object missing = System.Reflection.Missing.Value;
try
{
xlApp = new Microsoft.Office.Interop.Excel.Application();
if (xlApp == null)
return;
// Uncomment the line below if you want to see what's happening in Excel
// xlApp.Visible = true;
xlWorkbook = xlApp.Workbooks.Open(@"C:\Book1.xls", missing, readOnly1, missing, missing, missing, missing, missing, missing, missing, missing, isVisible, missing, missing, missing);
xlSheets = (Excel.Sheets)xlWorkbook.Sheets;
// The first argument below inserts the new worksheet as the first one
xlNewSheet = (Excel.Worksheet)xlSheets.Add(xlSheets[1], Type.Missing, Type.Missing, Type.Missing);
xlNewSheet.Name = worksheetName;
xlWorkbook.Save();
xlWorkbook.Close(Type.Missing, Type.Missing, Type.Missing);
xlApp.Quit();
}
finally
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(xlNewSheet);
System.Runtime.InteropServices.Marshal.ReleaseComObject(xlSheets);
System.Runtime.InteropServices.Marshal.ReleaseComObject(xlWorkbook);
System.Runtime.InteropServices.Marshal.ReleaseComObject(xlApp);
//xlApp = null;
}
Nothing suggested above worked for me (including the NODE_PATH variable). I created a symlink of "node_modules" from my local folder to the global AppData (e.g. below) and it worked like a charm.
C:\Users\mmoinuddin\AppData\Roaming\npm>mklink /D node_modules c:\essportreact\day1\node_modules
symbolic link created for node_modules <<===>> c:\essportreact\day1\node_modules
C:\essportreact\day1>webpack
Hash: 2a82a67f90f9aa05ab4a
Version: webpack 1.15.0
Use this:
long startTime = System.currentTimeMillis();
long elapsedTime = 0L;
while (elapsedTime < 2*60*1000) {
    //perform db poll/check
    elapsedTime = System.currentTimeMillis() - startTime;
}
//Throw your exception
You also need to add these lines in postgresql.conf and restart the server:
log_directory = 'pg_log'
log_filename = 'postgresql-dateformat.log'
log_statement = 'all'
logging_collector = on
Create an XML file called border.xml in the drawable folder as below:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item>
<shape android:shape="rectangle">
<solid android:color="#FF0000" />
</shape>
</item>
<item android:left="5dp" android:right="5dp" android:top="5dp" >
<shape android:shape="rectangle">
<solid android:color="#000000" />
</shape>
</item>
</layer-list>
Then add this to the linear layout as background, like this:
android:background="@drawable/border"
false == 0 and true == !false; i.e. anything that is not zero and can be converted to a boolean is not false, thus it must be true.
Some examples to clarify:
if(0) // false
if(1) // true
if(2) // true
if(0 == false) // true
if(0 == true) // false
if(1 == false) // false
if(1 == true) // true
if(2 == false) // false
if(2 == true) // false
cout << false // 0
cout << true // 1
true
evaluates to 1
, but any int
that is not false
(i.e. 0
) evaluates to true
but is not equal to true
since it isn't equal to 1
.
The other posters are correct: you cannot connect to MySQL directly from JavaScript, because JavaScript runs on the client while MySQL lives on the server.
So your best bet is to use Ajax to call a server-side handler, as quoted above. If you let us know what language your project uses (PHP/Java/.NET), we can help more specifically.
If your project uses PHP, then the example from Merlyn is a good place to start; I would personally use jQuery.ajax() to cut down your code and reduce cross-browser issues.
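As a rough sketch of that approach with jQuery (getData.php here is a hypothetical server-side handler that does the actual MySQL query; adapt the name and response format to your project):
$.ajax({
    url: 'getData.php',   // hypothetical PHP script that queries MySQL
    type: 'GET',
    dataType: 'json',
    success: function(rows) { console.log(rows); },
    error: function() { alert('Request failed'); }
});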
You are wrong: there is a way to kill an application. In a class whose superclass is Application, we use a field, for example killApp.
When the splash screen (the first activity) starts, we set killApp to false in its onResume(). Then, at the end of onResume() in every other activity, we call something like:
if (AppClass.killApp())
    finish();
Every activity that comes to the screen has onResume() called. At that point we check whether our field killApp is true; if it is, the current activity calls finish(). To trigger the whole action, for example from a button handler, we use:
AppClass.setkillApplication(true);
finish();
return;
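For reference, a minimal sketch of the Application subclass those calls assume; the names AppClass, killApp() and setkillApplication() come from the snippets above, the rest is illustrative:
public class AppClass extends android.app.Application {
    private static boolean killApp;

    public static boolean killApp() {
        return killApp;
    }

    public static void setkillApplication(boolean state) {
        killApp = state;
    }
}
Remember to register the class via the android:name attribute on <application> in the manifest so it is actually used.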
You can install it with any of these:
Node.js: npm install underscore
Meteor.js: meteor add underscore
Require.js: require(["underscore"], ...
Bower: bower install underscore
Component: component install jashkenas/underscore
Here's the link to the official page: http://underscorejs.org/
As hinted at by the post "Error in chrome: Content-Type is not allowed by Access-Control-Allow-Headers", just add the additional header to your web.config like so:
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Origin, X-Requested-With, Content-Type, Accept" />
</customHeaders>
</httpProtocol>
I had a similar problem to solve, so I came across this question and did a performance comparison of Andrew Clark's and narayan's answers, which I would like to share.
The primary difference between the two answers is how they iterate over the inner lists: one uses the builtin map, the other a list comprehension. map has a slight performance advantage over the equivalent list comprehension when it doesn't require a lambda, so in the context of this question map should perform slightly better.
Let's run a benchmark to see whether that is actually true. All tests use Python 3.5.0. In the first set, I keep the number of elements per list at 10 and vary the number of lists from 10 to 100,000:
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,10))]*10]"
>>> 100000 loops, best of 3: 15.2 usec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,10))]*10]"
>>> 10000 loops, best of 3: 19.6 usec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,10))]*100]"
>>> 100000 loops, best of 3: 15.2 usec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,10))]*100]"
>>> 10000 loops, best of 3: 19.6 usec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,10))]*1000]"
>>> 1000 loops, best of 3: 1.43 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,10))]*1000]"
>>> 100 loops, best of 3: 1.91 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,10))]*10000]"
>>> 100 loops, best of 3: 13.6 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,10))]*10000]"
>>> 10 loops, best of 3: 19.1 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,10))]*100000]"
>>> 10 loops, best of 3: 164 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,10))]*100000]"
>>> 10 loops, best of 3: 216 msec per loop
In the next set of tests I raise the number of elements per list to 100:
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,100))]*10]"
>>> 10000 loops, best of 3: 110 usec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,100))]*10]"
>>> 10000 loops, best of 3: 151 usec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,100))]*100]"
>>> 1000 loops, best of 3: 1.11 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,100))]*100]"
>>> 1000 loops, best of 3: 1.5 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,100))]*1000]"
>>> 100 loops, best of 3: 11.2 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,100))]*1000]"
>>> 100 loops, best of 3: 16.7 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,100))]*10000]"
>>> 10 loops, best of 3: 134 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,100))]*10000]"
>>> 10 loops, best of 3: 171 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,100))]*100000]"
>>> 10 loops, best of 3: 1.32 sec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,100))]*100000]"
>>> 10 loops, best of 3: 1.7 sec per loop
Let's take a brave step and raise the number of elements per list to 1000:
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,1000))]*10]"
>>> 1000 loops, best of 3: 800 usec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,1000))]*10]"
>>> 1000 loops, best of 3: 1.16 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,1000))]*100]"
>>> 100 loops, best of 3: 8.26 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,1000))]*100]"
>>> 100 loops, best of 3: 11.7 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,1000))]*1000]"
>>> 10 loops, best of 3: 83.8 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,1000))]*1000]"
>>> 10 loops, best of 3: 118 msec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,1000))]*10000]"
>>> 10 loops, best of 3: 868 msec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,1000))]*10000]"
>>> 10 loops, best of 3: 1.23 sec per loop
>>> python -m timeit "[list(map(float,k)) for k in [list(range(0,1000))]*100000]"
>>> 10 loops, best of 3: 9.2 sec per loop
>>> python -m timeit "[[float(y) for y in x] for x in [list(range(0,1000))]*100000]"
>>> 10 loops, best of 3: 12.7 sec per loop
From these tests we can conclude that map has a performance benefit over list comprehension in this case. This also applies if you are casting to int or str. For a small number of lists with few elements per list, the difference is negligible. For larger lists with more elements each, one might prefer map over a list comprehension, but it totally depends on the application's needs.
However, I personally find list comprehensions more readable and idiomatic than map. They are the de facto standard in Python, and people (especially beginners) are usually more proficient and comfortable with list comprehensions than with map.
0.00 is actually 0. If you need to show 0.00 when you echo, simply use number_format like this:
number_format($number, 2);
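For example, with the default separators:
echo number_format(0, 2);       // "0.00"
echo number_format(1234.5, 2);  // "1,234.50"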
#include <tr1/unordered_map>
will get you the TR1 hash container with unique keys (standardized in C++11 as std::unordered_map). Usage:
std::tr1::unordered_map<std::string,int> my_map;
my_map["answer"] = 42;
printf( "The answer to life and everything is: %d\n", my_map["answer"] );
# convert the column to float, then render each value as a euro-formatted string
df_buy['BUY'] = df_buy['BUY'].astype('float')
df_buy['BUY'] = ['€ {:,.2f}'.format(i) for i in list(df_buy['BUY'])]
You're trying to pass a lambda as the key function.
Python, like other languages such as C# or F#, supports lambda functions.
As for key functions, according to the documentation:
Both list.sort() and sorted() have a key parameter to specify a function to be called on each list element prior to making comparisons.
...
The value of the key parameter should be a function that takes a single argument and returns a key to use for sorting purposes. This technique is fast because the key function is called exactly once for each input record.
So, key functions have a parameter key and it can indeed receive a lambda function.
In Real Python there's a nice example of its usage. Let's say you have the following list
ids = ['id1', 'id100', 'id2', 'id22', 'id3', 'id30']
and want to sort by the integer part of each id. Then you'd do something like
sorted_ids = sorted(ids, key=lambda x: int(x[2:])) # Integer sort
and printing it would give
['id1', 'id2', 'id3', 'id22', 'id30', 'id100']
In your particular case, you're only missing the key= before the lambda. So you'd want to use the following:
a = sorted(a, key=lambda x: x.modified, reverse=True)
If you are using a storyboard and overriding UICollectionViewDelegateFlowLayout, then in Swift 5 and Xcode 11 also set Estimate Size to None.
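If you are configuring the layout in code instead, a sketch of the equivalent (assuming your layout is a flow layout):
(collectionView.collectionViewLayout as? UICollectionViewFlowLayout)?.estimatedItemSize = .zero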
In Java, the String class has a method with which you can do this: toCharArray():
String k = "abcdef";
char[] x = k.toCharArray();
This results in the following array:
[a,b,c,d,e,f]
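You can then iterate over the array as usual:
for (char c : x) {
    System.out.print(c + " ");  // prints: a b c d e f
}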
I've run into this a few times when working with various APIs. Often a quick fix is to add "&callback=?" to the end of the query string. Sometimes the ampersand has to be a character code, and sometimes it's "?" instead: "?callback=?" (see Forecast.io API Usage with jQuery).
I use buffers like tabs, using the BufExplorer plugin and a few macros:
" CTRL+b opens the buffer list
map <C-b> <esc>:BufExplorer<cr>
" gz in command mode closes the current buffer
map gz :bdelete<cr>
" g[bB] in command mode switch to the next/prev. buffer
map gb :bnext<cr>
map gB :bprev<cr>
With BufExplorer you don't have a tab bar at the top, but on the other hand it saves space on your screen, plus you can have an infinite number of files/buffers open and the buffer list is searchable...
schema : database : table :: floor plan : house : room
This worked for me.
Add these attributes to your EditText:
android:focusable="true"
android:focusableInTouchMode="true"
After this, in your code you can simply write:
editText.clearFocus()
int *pointer[280];
// Creates an array of 280 pointers to int.
// On a 32-bit OS, each pointer takes 4 bytes, so 4 * 280 = 1120 bytes.
int (*pointer)[100][280];
// Creates a single pointer used to point to an array of [100][280] ints.
// Here, only 4 bytes.
Coming to your question, int (*pointer)[280]; and int (*pointer)[100][280]; are different even though both can point into the same [100][280] 2D array.
If int (*pointer)[280]; is incremented, it will point to the next 1D array of 280 ints, whereas incrementing int (*pointer)[100][280]; steps past the whole 2D array and points to the memory right after it. Accessing that memory may cause problems if it doesn't belong to your process.
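A small sketch to make the step sizes concrete:
#include <stdio.h>

int main(void) {
    int arr[100][280];
    int (*row)[280] = arr;           // points to the first row of 280 ints
    int (*whole)[100][280] = &arr;   // points to the entire 2D array

    printf("%zu\n", sizeof *row);    // 280 * sizeof(int)
    printf("%zu\n", sizeof *whole);  // 100 * 280 * sizeof(int)

    row++;    // now points to arr[1], the next row
    whole++;  // now points just past the whole 2D array
    return 0;
}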
You have to put all your assets in the public folder, and to access them from your views you can use the asset() helper method.
E.g. you can reference assets/images/image.png in your view as follows:
<img src="{{asset('assets/images/image.png')}}">
You can use the effect below to animate; change the values as per your requirements. Note that the pulsate effect requires jQuery UI:
$("#myElem").fadeIn('slow').animate({opacity: 1.0}, 1500).effect("pulsate", { times: 2 }, 800).fadeOut('slow');
From the Twig docs:
Unlike in PHP, it's not possible to break or continue in a loop.
But still:
You can however filter the sequence during iteration which allows you to skip items.
Example 1 (for huge lists you can filter posts using slice(start, length)):
{% for post in posts|slice(0,10) %}
<h2>{{ post.heading }}</h2>
{% endfor %}
Example 2:
{% for post in posts if post.id < 10 %}
<h2>{{ post.heading }}</h2>
{% endfor %}
You can even use your own Twig filters for more complex conditions, like:
{% for post in posts|onlySuperPosts %}
<h2>{{ post.heading }}</h2>
{% endfor %}
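For completeness, such a custom filter could be registered roughly like this on the PHP side (onlySuperPosts and isSuper() are hypothetical names; Twig\TwigFilter is the Twig 2/3 API):
$twig->addFilter(new \Twig\TwigFilter('onlySuperPosts', function (array $posts) {
    // keep only the posts the hypothetical isSuper() accepts
    return array_filter($posts, function ($post) { return $post->isSuper(); });
}));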
ObservableCollection<T> has a constructor overload which takes IEnumerable<T>.
Example for a List of int:
ObservableCollection<int> myCollection = new ObservableCollection<int>(myList);
One more example, for a List of ObjectA:
ObservableCollection<ObjectA> myCollection = new ObservableCollection<ObjectA>(myList as List<ObjectA>);
student_user = User.objects.get(id=user_id)
available_subjects = Subject.objects.exclude(subject_grade__student__user=student_user) # My ans
enrolled_subjects = SubjectGrade.objects.filter(student__user=student_user)
context.update({'available_subjects': available_subjects, 'student_user': student_user,
'request':request, 'enrolled_subjects': enrolled_subjects})
In my application above, I assume that once a student is enrolled, a SubjectGrade instance is created containing the enrolled subject and the student.
Subject and the student's User model are foreign keys on the SubjectGrade model.
In available_subjects, I exclude all subjects already enrolled by the current student_user by checking every SubjectGrade instance whose student attribute is the current student_user.
P.S. Apologies in advance if the explanation is still unclear; it's the best I can provide. Thank you so much.
webbot even works on web pages that have dynamically changing ids and classnames, and it has more methods and features than selenium or mechanize.
Here's a snippet :)
from webbot import Browser
web = Browser()
web.go_to('google.com')
web.click('Sign in')
web.type('[email protected]' , into='Email')
web.click('NEXT' , tag='span')
web.type('mypassword' , into='Password' , id='passwordFieldId') # specific selection
web.click('NEXT' , tag='span') # you are logged in ^_^
The docs are also pretty straightforward and simple to use: https://webbot.readthedocs.io
The solution is to set the default value in your .elem. This animation works fine with -moz but is not yet implemented in -webkit.
Look at the fiddle I updated from yours: http://jsfiddle.net/DoubleYo/4Vz63/1648/
It works fine in Firefox but not in Chrome.
.elem {
    position: absolute;
    top: 40px;
    left: 40px;
    width: 0;
    height: 0;
    border-style: solid;
    border-width: 75px;
    border-color: red blue green orange;
    transition-property: transform;
    transition-duration: 1s;
}
.elem:hover {
    animation-name: rotate;
    animation-duration: 2s;
    animation-iteration-count: infinite;
    animation-timing-function: linear;
}

@keyframes rotate {
    from {transform: rotate(0deg);}
    to {transform: rotate(360deg);}
}

<div class="elem"></div>
for i in count: means for i in 7:, which won't work. The bit after the in should be an iterable, not a number. Try this:
for i in range(count):
I agree with sunetos that you'll have to use the $.ajax function in order to pass request headers. To do that, write a function for the beforeSend event handler, which is one of the $.ajax() options. Here's a quick sample of how to do that:
<html>
<head>
<script src="http://code.jquery.com/jquery-1.4.2.min.js"></script>
<script type="text/javascript">
$(document).ready(function() {
$.ajax({
url: 'service.svc/Request',
type: 'GET',
dataType: 'json',
success: function() { alert('hello!'); },
error: function() { alert('boo!'); },
beforeSend: setHeader
});
});
function setHeader(xhr) {
xhr.setRequestHeader('securityCode', 'Foo');
xhr.setRequestHeader('passkey', 'Bar');
}
</script>
</head>
<body>
<h1>Some Text</h1>
</body>
</html>
If you run the code above and watch the traffic in a tool like Fiddler, you'll see the two request headers (securityCode and passkey) passed in.
The setHeader function could also be inlined in the $.ajax options, but I wanted to call it out.
Hope this helps!
This is simple if you only use Selenium WebDriver and forget about Selenium RC. I'd go like this:
WebDriver driver = new FirefoxDriver();
WebElement email = driver.findElement(By.id("email"));
email.sendKeys("[email protected]");
The reason for the NullPointerException, however, is that your variable driver was never initialized; you start FirefoxDriver in a variable wb that is never used.
select * from person where DATE(dob) between '2011-01-01' and '2011-01-31'
Surprisingly, such conversions solve many problems in MySQL.
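One caveat: wrapping the column in DATE() generally prevents MySQL from using an index on dob. An equivalent, index-friendly form is a plain range predicate:
select * from person where dob >= '2011-01-01' and dob < '2011-02-01';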
In the proxy server block, set:
location / {
proxy_pass http://ip:80;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
}
In the PHP server block, set:
server {
client_body_timeout 120;
location = /index.php {
#include fastcgi.conf; # example
#fastcgi_pass unix:/run/php/php7.3-fpm.sock; # example version
fastcgi_read_timeout 120s;
}
}
2011-01-15 is evaluated as integer arithmetic: 2011 - 1 - 15 = 1995. This is then implicitly converted from an integer to a date, giving you the 1995th day, counting from 1st Jan 1900.
You need to use quotes: SET @test = '2011-02-15'
Following up on Niketya's answer, there's a good explanation of Pivot Tables here: http://peltiertech.com/WordPress/grouping-by-date-in-a-pivot-table/
For Excel 2007 you'd create the Pivot Table, make your Date column a Row Label and your Amount column a Value. Then right-click one of the row labels (i.e. a date) and select Group; you'd then get the option to group by day, month, etc.
Personally that's the way I'd go.
If you prefer formulae, Smandoli's answer would get you most of the way there. To be able to use Sumif by day, you'd add a column with a formula like:
=DATE(YEAR(C1), MONTH(C1), DAY(C1))
where column C contains your datetimes.
You can then use this in your sumif.
Try using var:
var s = new List<string> { "a", "bk", "ca", "d" };
The pattern rules for glob are not regular expressions. Instead, they follow standard Unix path expansion rules. There are only a few special characters: two different wild-cards, and character ranges are supported [from pymotw: glob – Filename pattern matching].
So you can exclude some files with patterns.
For example, to exclude manifest files (files starting with _) with glob, you can use:
files = glob.glob('files_path/[!_]*')
I'd like to post some important information about undefined, which beginners might not know.
Look at the following code:
/*
* Consider there is no code above.
* The browser runs these lines only.
*/
// var a;
// --- commented out to point that we've forgotten to declare `a` variable
if ( a === undefined ) {
alert('Not defined');
} else {
alert('Defined: ' + a);
}
alert('Doing important job below');
If you run this code, where the variable a HAS NEVER BEEN DECLARED using var, you will get an ERROR EXCEPTION and, surprisingly, see no alerts at all.
Instead of 'Doing important job below', your script will TERMINATE UNEXPECTEDLY, throwing an unhandled exception on the very first line.
Here is the only bulletproof way to check for undefined, using the typeof keyword, which was designed for exactly this purpose:
/*
* Correct and safe way of checking for `undefined`:
*/
if ( typeof a === 'undefined' ) {
alert(
'The variable is not declared in this scope, \n' +
'or you are pointing to unexisting property, \n' +
'or no value has been set yet to the variable, \n' +
'or the value set was `undefined`. \n' +
'(the last two cases are equivalent; don\'t worry if that blows your mind.)'
);
}
/*
* Use `typeof` for checking things like that
*/
This method works in all possible cases.
A final argument for using it is that undefined can potentially be overwritten in earlier versions of JavaScript:
/* @ Trollface @ */
undefined = 2;
/* Happy debugging! */
Hope I was clear enough.
To get all '.txt' file names inside 'dataPath' folder as a list in a Pythonic way:
from os import listdir
from os.path import isfile, join
path = "/dataPath/"
onlyTxtFiles = [f for f in listdir(path) if isfile(join(path, f)) and f.endswith(".txt")]
print(onlyTxtFiles)
Microsoft's GUID textual representation can be a UUID surrounded by two curly braces {}, e.g. {21EC2020-3AEA-4069-A2DD-08002B30309D}.
Here is a SQL query to do that:
select column_name, count(1)
from table
group by column_name
having count(column_name) > 1;
This will work, although when embedding PHP in HTML it is better practice to use the following form:
<table>
<?php foreach($array as $key=>$value): ?>
<tr>
<td><?= $key; ?></td>
</tr>
<?php endforeach; ?>
</table>
You can find the doc for the alternative syntax on PHP.net
#config/app.php
'aliases' => [
...
'Input' => Illuminate\Support\Facades\Input::class,
...
],
#Use Controller file
use Illuminate\Support\Facades\Input;
==OR==
use Input;
Read full example: https://devnote.in/laravel-class-input-not-found
There are several things you need to address to get it working: the parameter name is @code, not @ouput; use Add rather than AddWithValue, since an output parameter is not supposed to carry a value in; and use ExecuteNonQuery if you're not returning rows. Try:
SqlParameter output = new SqlParameter("@code", SqlDbType.Int);
output.Direction = ParameterDirection.Output;
cmd.Parameters.Add(output);
cmd.ExecuteNonQuery();
MessageBox.Show(output.Value.ToString());
Yes. First, construct a File representing the image path:
File file = new File(a);
If you're starting from a relative path:
file = new File(file.getAbsolutePath());
Then, get the parent:
String dir = file.getParent();
Or, if you want the directory as a File object:
File dirAsFile = file.getParentFile();
You can use top_n:
df %>% group_by(A, B) %>% top_n(n=1)
This will rank by the last column (value) and return the top n=1 rows.
Currently, you can't change this default without causing an error (see https://github.com/hadley/dplyr/issues/426).
Tablets support tvdpi, whose scaling factor is 1.33 times mdpi dimensions:
ldpi | mdpi | tvdpi | hdpi | xhdpi | xxhdpi | xxxhdpi
0.75 | 1 | 1.33 | 1.5 | 2 | 3 | 4
This means that if you generate a 400x400 image for xxxhdpi devices, you should generate the same resource in 300x300 for xxhdpi, 200x200 for xhdpi, 150x150 for hdpi, 133x133 for tvdpi, 100x100 for mdpi, and 75x75 for ldpi devices.
You cannot TRUNCATE a table that has FK constraints applied to it (TRUNCATE is not the same as DELETE).
To work around this, use either of these solutions. Both present risks of damaging the data integrity.
Option 1: remove the FK constraints, run TRUNCATE, then recreate the constraints.
Option 2 (suggested by user447951 in their answer):
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE table $table_name;
SET FOREIGN_KEY_CHECKS = 1;
Works in Windows 10; essentially identical to vanuan's answer, but cleaner (taken from somewhere, I can't remember where):
import sys
for p in sys.path:
print(p)
sudo apt-get install python-dev # for python2.x installs
sudo apt-get install python3-dev # for python3.x installs
It will install any missing headers. It solved my issue
public void PrintOccurrences(List<string> words)
{
    // group identical words and count each group
    var selectQuery =
        from word in words
        group word by word into g
        select new { Word = g.Key, Count = g.Count() };

    foreach (var word in selectQuery)
        Console.WriteLine($"{word.Word}: {word.Count}");
}
Anonymous ftp logins are usually the username 'anonymous' with the user's email address as the password. Some servers parse the password to ensure it looks like an email address.
User: anonymous
Password: [email protected]
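As a quick sketch using Python's ftplib (the host name is a placeholder):
from ftplib import FTP

ftp = FTP("ftp.example.com")                # placeholder host
ftp.login("anonymous", "user@example.com")  # email-style password by convention
print(ftp.nlst())                           # list the initial directory
ftp.quit()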
Download url to bytes and convert bytes into stream:
using (var client = new WebClient())
{
var content = client.DownloadData(url);
using (var stream = new MemoryStream(content))
{
...
}
}