I know this is an older question, but I wanted to post an answer for users with the same question:
curl -H 'Cache-Control: no-cache' http://www.example.com
This curl command asks the web server, via the Cache-Control request header, to return non-cached data.
Ensure the path to the Xcode.app bundle contains no spaces or unusual characters. I had Xcode installed in a ~/Downloads/Last Dev Tools/ folder (so, with spaces), and renaming the folder to LastDevTools fixed this (after resetting the xcode-select path, though).
I went into wp-content/ and deleted object-cache.php and advanced-cache.php, and it worked fine for me.
The Memcached client library was recently released as stable. It is used by Digg (it was developed for Digg by Andrei Zmievski, who is now no longer with Digg) and implements much more of the memcached protocol than the older Memcache client. The most important features that Memcached has are:
All of these points were enough for me to switch to the newer client, and I can tell you that it works like a charm. There is an external dependency on the libmemcached library, but I have managed to install it on Ubuntu and Mac OS X nonetheless, so no problems there so far.
If you decide to update to the newer library, I suggest you update to the latest server version as well, as it has some nice features too. You will need to install libevent for it to compile, but on Ubuntu it wasn't much trouble.
I haven't seen any frameworks pick up the new memcached client thus far (although I don't keep track of them), but I presume Zend will get on board shortly.
Zend Framework 2 has an adapter for Memcached which can be found here
They are not identical. Memcache is older, but it has some limitations. I was using it just fine in my application until I realized you can't store a literal FALSE in the cache: a FALSE value returned from the cache is the same as the FALSE returned when a value is not found in the cache, and there is no way to check which is which. Memcached has an additional method (among others), Memcached::getResultCode, that will tell you whether the key was found.
Because of this limitation I switched to storing empty arrays instead of FALSE in the cache. I am still using Memcache, but I just wanted to put this info out there for people who are deciding.
If you have PHP & PHP-memcached installed, you can run
$ php -r '$c = new Memcached(); $c->addServer("localhost", 11211); var_dump( $c->getAllKeys() );'
I would advise you to use pylibmc instead.
It can act as a drop-in replacement for python-memcache, but it is a lot faster (as it's written in C). And you can find handy documentation for it here.
As for the question: since pylibmc just acts as a drop-in replacement, you can still refer to the python-memcache documentation for your pylibmc programming.
If you have an older version of memcached and need a script to wrap memcached as a service, here it is: Memcached Service Script
Another bonus is that it can be very clear how memcache is going to behave in a caching scenario, while Redis is generally used as a persistent datastore, though it can be configured to behave just like memcached, i.e. evicting the least recently used items when it reaches maximum capacity (a minimal config sketch follows below).
Some apps I've worked on use both, just to make it clear how we intend the data to behave: for stuff in memcache, we write code to handle the cases where it isn't there; for stuff in Redis, we rely on it being there.
Other than that, Redis is generally regarded as superior for most use cases, being more feature-rich and thus more flexible.
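For reference, here is a minimal redis.conf sketch of that LRU behaviour (the 256mb cap is just an illustrative value, not a recommendation):
# Evict the least-recently-used keys once this memory cap is reached,
# which makes Redis behave like memcached for caching workloads.
maxmemory 256mb
maxmemory-policy allkeys-lru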
Before you phpize, make sure to update your $PATH to point to the new PHP! phpize uses your environment, and if you still have vestiges of your old PHP in your path or other parts of the environment, things will get hairy!
The syntax is shown below:
"your condition"? "step if true":"step if condition fails"
This is because you have specified the form method as GET
Change code in the view to this:
using (@Html.BeginForm("myMethod", "Home", FormMethod.Post, new { id = @item.JobId })){
}
Quote from Android Docs:
If you need to format your strings using String.format(String, Object...), then you can do so by putting your format arguments in the string resource. For example, with the following resource:
<string name="welcome_messages">Hello, %1$s! You have %2$d new messages.</string>
In this example, the format string has two arguments: %1$s is a string and %2$d is a decimal number. You can format the string with arguments from your application like this:
Resources res = getResources();
String text = String.format(res.getString(R.string.welcome_messages), username, mailCount);
You seek the all-powerful *?
From the docs, Greedy versus Non-Greedy:
the non-greedy qualifiers *?, +?, ??, or {m,n}? [...] match as little text as possible.
Just in case somebody still lands here looking for an answer: it can be done using plain PHP. An easy way is to round-trip the object through JSON.
function objectToArray(&$object)
{
return @json_decode(json_encode($object), true);
}
This is what I use when it comes to basic URL param additions or updates on the server side, like in Node.js.
CoffeeScript:
###
@method addUrlParam Adds a parameter to a given url. If the parameter already exists in the url it is replaced.
@param {string} url
@param {string} key Parameter's key
@param {string} value Parameter's value
@returns {string} new url containing the parameter
###
addUrlParam = (url, key, value) ->
  newParam = key + "=" + value
  result = url.replace(new RegExp('(&|\\?)' + key + '=[^\&|#]*'), '$1' + newParam)
  if result is url
    result = if url.indexOf('?') != -1 then url.split('?')[0] + '?' + newParam + '&' + url.split('?')[1]
    else if url.indexOf('#') != -1 then url.split('#')[0] + '?' + newParam + '#' + url.split('#')[1]
    else url + '?' + newParam
  return result
JavaScript:
function addUrlParam(url, key, value) {
  var newParam = key + "=" + value;
  var result = url.replace(new RegExp("(&|\\?)" + key + "=[^\&|#]*"), '$1' + newParam);
  if (result === url) {
    result = (url.indexOf("?") != -1 ? url.split("?")[0] + "?" + newParam + "&" + url.split("?")[1]
      : (url.indexOf("#") != -1 ? url.split("#")[0] + "?" + newParam + "#" + url.split("#")[1]
      : url + '?' + newParam));
  }
  return result;
}
var url = "http://www.example.com?foo=bar&ciao=3&doom=5#hashme";
result1.innerHTML = addUrlParam(url, "ciao", "1");

<p id="result1"></p>
The easy way to do this is to put all your files, including index.html and all resources such as CSS, JS, etc., in a folder named public (or you can name it whatever you want), and now you can use Express and just tell the app to use __dirname, as shown below.
In your server.js, using Express, add these lines:
var express = require('express');
var app = express();
app.use(express.static(__dirname + '/public'));
And if you want to have a separate directory, add a new dir under the public directory and use that path, "/public/YourDirName".
So what are we doing here exactly? We are creating an Express instance named app, and we are giving it the address of the public directory to access all the resources. Hope this helps!
Expanding on Chaoley's answer, I implemented a function that, given a list of locations (objects with lng and lat properties) whose coordinates are exactly the same, moves them away from their original location a little bit (modifying objects in place). They then form a nice circle around the center point.
I found that, for my latitude (52deg North), 0.0003 degrees of circle radius work best, and that you have to make up for the difference between latitude and longitude degrees when converted to kilometres. You can find approximate conversions for your latitude here.
var correctLocList = function (loclist) {
var lng_radius = 0.0003, // degrees of longitude separation
lat_to_lng = 111.23 / 71.7, // lat to long proportion in Warsaw
angle = 0.5, // starting angle, in radians
loclen = loclist.length,
step = 2 * Math.PI / loclen,
i,
loc,
lat_radius = lng_radius / lat_to_lng;
for (i = 0; i < loclen; ++i) {
loc = loclist[i];
loc.lng = loc.lng + (Math.cos(angle) * lng_radius);
loc.lat = loc.lat + (Math.sin(angle) * lat_radius);
angle += step;
}
};
The defaultSelected attribute is not settable; it's just for informational purposes:
The defaultSelected property returns the default value of the selected attribute.
This property returns true if an option is selected by default, otherwise it returns false.
I think you want:
$('option[value=valueToSelect]', newOption).attr('selected', 'selected');
I.e. set the selected attribute of the option you want to select.
Without trying to fix your code, here's roughly how I would do it:
function buildSelect(options, defaultValue) { // 'default' is a reserved word in JavaScript, so use a different parameter name
    // assume options = { value1 : 'Name 1', value2 : 'Name 2', ... }
    // defaultValue = 'value1'
    var $select = $('<select></select>');
    var $option;
    for (var val in options) {
        $option = $('<option value="' + val + '">' + options[val] + '</option>');
        if (val == defaultValue) {
            $option.attr('selected', 'selected');
        }
        $select.append($option);
    }
    return $select;
}
You seem to have a lot of baggage and dependencies already and I can't tell you how to best integrate the selected option into your code without seeing more of it, but hopefully this helps.
(The typical method would be to use plot just once to set up the limits, possibly to include the range of all series combined, and then to use points and lines to add the separate series.) To use plot multiple times with par(new=TRUE), you need to make sure that your first plot has a proper ylim to accept all the series (and in another situation, you may need to also use the same strategy for xlim):
# first plot
plot(x, y1, ylim=range(c(y1,y2)))
# second plot EDIT: needs to have same ylim
par(new = TRUE)
plot(x, y2, ylim=range(c(y1,y2)), axes = FALSE, xlab = "", ylab = "")
This next code will do the task more compactly. By default you get numbers as points, but the second call gives you typical R-type "points":
matplot(x, cbind(y1,y2))
matplot(x, cbind(y1,y2), pch=1)
function readFromRegistry (strRegistryKey, strDefault )
Dim WSHShell, value
On Error Resume Next
Set WSHShell = CreateObject("WScript.Shell")
value = WSHShell.RegRead( strRegistryKey )
if err.number <> 0 then
readFromRegistry= strDefault
else
readFromRegistry=value
end if
set WSHShell = nothing
end function
Usage :
str = readfromRegistry("HKEY_LOCAL_MACHINE\SOFTWARE\Adobe\ESD\Install_Dir", "ha")
wscript.echo "returned " & str
In regard to the answer from @Perception: it's true that this is very memory-consuming when working with byte arrays, but you could also simply write back into the output stream:
@Path("/picture")
public class ProfilePicture {
@GET
@Path("/thumbnail")
@Produces("image/png")
public StreamingOutput getThumbNail() {
return new StreamingOutput() {
@Override
public void write(OutputStream os) throws IOException, WebApplicationException {
//... read your stream and write into os
}
};
}
}
It's because of the Security options.
Go to System Preferences... > Security & Privacy, and there should be a button saying Open Anyway under the General tab.
You can avoid doing this by changing the options under Allow apps downloaded from:, however I would recommend keeping it at the default, Mac App Store and identified developers.
To remove remote branches:
$ git remote prune origin
To remove local branches that are already merged:
$ git branch -D $(git branch --merged)
The answer is correct, but you don't need to declare a variable. A string can contain your character:
"This string contains omega, that looks like this: \u03A9"
Unfortunately, those ASCII escape codes are still needed for displaying UTF-8 characters, but I am still waiting (and have been for too many years...) for the day when UTF-8 will be as universal as ASCII once was, and ASCII will be just a remembrance of the past.
You can change the behavior of the built in types in Python. For your case it's really easy to create a dict subclass that will store duplicated values in lists under the same key automatically:
class Dictlist(dict):
def __setitem__(self, key, value):
try:
self[key]
except KeyError:
super(Dictlist, self).__setitem__(key, [])
self[key].append(value)
Output example:
>>> d = dictlist.Dictlist()
>>> d['test'] = 1
>>> d['test'] = 2
>>> d['test'] = 3
>>> d
{'test': [1, 2, 3]}
>>> d['other'] = 100
>>> d
{'test': [1, 2, 3], 'other': [100]}
You didn't say what version you were using, but in SQL 2005 and above, you can use a common table expression with the OVER Clause. It goes a little something like this:
WITH cte AS (
SELECT[foo], [bar],
row_number() OVER(PARTITION BY foo, bar ORDER BY baz) AS [rn]
FROM TABLE
)
DELETE cte WHERE [rn] > 1
Play around with it and see what you get.
(Edit: In an attempt to be helpful, someone edited the ORDER BY clause within the CTE. To be clear, you can order by anything you want here; it needn't be one of the columns returned by the cte. In fact, a common use-case here is that "foo, bar" are the group identifier and "baz" is some sort of time stamp. In order to keep the latest, you'd do ORDER BY baz desc.)
Just to give you some idea of how other databases solve this problem: DolphinDB also has built-in support for pivoting, and the SQL looks much more intuitive and neat. It is as simple as specifying the key column (Store), the pivoting column (Week), and the calculated metric (sum(xCount)).
//prepare a 10-million-row table
n=10000000
t=table(rand(100, n) + 1 as Store, rand(54, n) + 1 as Week, rand(100, n) + 1 as xCount)
//use pivot clause to generate a pivoted table pivot_t
pivot_t = select sum(xCount) from t pivot by Store, Week
DolphinDB is a columnar, high-performance database. The calculation in the demo takes as little as 546 ms on a Dell XPS laptop (i7 CPU). For more details, please refer to the online DolphinDB manual: https://www.dolphindb.com/help/index.html?pivotby.html
There are a bunch on here:
http://www.webservicex.net/WS/wscatlist.aspx
Just google for "Free WebService" or "Open WebService" and you'll find tons of open SOAP endpoints.
Remember, you can get a WSDL from any ASMX endpoint by adding ?WSDL to the url.
Just to add some performance insights.
A few years ago I had a project, where we had issues trying to set a large HTML / Text to various HTML elements.
It appeared, that "recreating" the element and injecting it to the DOM was way faster than any of the suggested methods to update the DOM content.
So something like:
var text = "very big content";
$("#regTitle").remove();
$("<div id='regTitle'>" + text + "</div>").appendTo("body");

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
Should get you a better performance. I haven't recently tried to measure that (browsers should be clever these days), but if you're looking for performance it may help.
The downside is that you will have more work to keep the DOM and the references in your scripts pointing to the right object.
You can use literals, it's more compact.
NSString* myString = [@(17) stringValue];
(It boxes the value as an NSNumber and uses its stringValue method.)
Here in 2017, Promises are built into JavaScript; they were added by the ES2015 spec (polyfills are available for outdated environments like IE8-IE11). The syntax they went with uses a callback you pass into the Promise constructor (the Promise executor), which receives the functions for resolving/rejecting the promise as arguments.
First, since async now has a meaning in JavaScript (even though it's only a keyword in certain contexts), I'm going to use later as the name of the function to avoid confusion.
Using native promises (or a faithful polyfill) it would look like this:
function later(delay) {
return new Promise(function(resolve) {
setTimeout(resolve, delay);
});
}
Note that that assumes a version of setTimeout that's compliant with the definition for browsers, where setTimeout doesn't pass any arguments to the callback unless you give them after the interval (this may not be true in non-browser environments, and didn't used to be true on Firefox, but is now; it's true on Chrome and even back on IE8).
If you want your function to optionally pass a resolution value, on any vaguely-modern browser that allows you to give extra arguments to setTimeout after the delay and then passes those to the callback when called, you can do this (current Firefox and Chrome; IE11+, presumably Edge; not IE8 or IE9, no idea about IE10):
function later(delay, value) {
return new Promise(function(resolve) {
setTimeout(resolve, delay, value); // Note the order, `delay` before `value`
/* Or for outdated browsers that don't support doing that:
setTimeout(function() {
resolve(value);
}, delay);
Or alternately:
setTimeout(resolve.bind(null, value), delay);
*/
});
}
If you're using ES2015+ arrow functions, that can be more concise:
function later(delay, value) {
return new Promise(resolve => setTimeout(resolve, delay, value));
}
or even
const later = (delay, value) =>
new Promise(resolve => setTimeout(resolve, delay, value));
If you want to make it possible to cancel the timeout, you can't just return a promise from later, because promises can't be cancelled.
But we can easily return an object with a cancel method and an accessor for the promise, and reject the promise on cancel:
const later = (delay, value) => {
let timer = 0;
let reject = null;
const promise = new Promise((resolve, _reject) => {
reject = _reject;
timer = setTimeout(resolve, delay, value);
});
return {
get promise() { return promise; },
cancel() {
if (timer) {
clearTimeout(timer);
timer = 0;
reject();
reject = null;
}
}
};
};
Live Example:
const later = (delay, value) => {
  let timer = 0;
  let reject = null;
  const promise = new Promise((resolve, _reject) => {
    reject = _reject;
    timer = setTimeout(resolve, delay, value);
  });
  return {
    get promise() { return promise; },
    cancel() {
      if (timer) {
        clearTimeout(timer);
        timer = 0;
        reject();
        reject = null;
      }
    }
  };
};

const l1 = later(100, "l1");
l1.promise
  .then(msg => { console.log(msg); })
  .catch(() => { console.log("l1 cancelled"); });

const l2 = later(200, "l2");
l2.promise
  .then(msg => { console.log(msg); })
  .catch(() => { console.log("l2 cancelled"); });
setTimeout(() => {
  l2.cancel();
}, 150);
Usually you'll have a promise library (one you write yourself, or one of the several out there). That library will usually have an object that you can create and later "resolve," and that object will have a "promise" you can get from it.
Then later would tend to look something like this:
function later() {
var p = new PromiseThingy();
setTimeout(function() {
p.resolve();
}, 2000);
return p.promise(); // Note we're not returning `p` directly
}
In a comment on the question, I asked:
Are you trying to create your own promise library?
and you said
I wasn't but I guess now that's actually what I was trying to understand. That how a library would do it
To aid that understanding, here's a very very basic example, which isn't remotely Promises-A compliant: Live Copy
<!DOCTYPE html>
<html>
<head>
<meta charset=utf-8 />
<title>Very basic promises</title>
</head>
<body>
<script>
(function() {
// ==== Very basic promise implementation, not remotely Promises-A compliant, just a very basic example
var PromiseThingy = (function() {
// Internal - trigger a callback
function triggerCallback(callback, promise) {
try {
callback(promise.resolvedValue);
}
catch (e) {
}
}
// The internal promise constructor, we don't share this
function Promise() {
this.callbacks = [];
}
// Register a 'then' callback
Promise.prototype.then = function(callback) {
var thispromise = this;
if (!this.resolved) {
// Not resolved yet, remember the callback
this.callbacks.push(callback);
}
else {
// Resolved; trigger callback right away, but always async
setTimeout(function() {
triggerCallback(callback, thispromise);
}, 0);
}
return this;
};
// Our public constructor for PromiseThingys
function PromiseThingy() {
this.p = new Promise();
}
// Resolve our underlying promise
PromiseThingy.prototype.resolve = function(value) {
var n;
if (!this.p.resolved) {
this.p.resolved = true;
this.p.resolvedValue = value;
for (n = 0; n < this.p.callbacks.length; ++n) {
triggerCallback(this.p.callbacks[n], this.p);
}
}
};
// Get our underlying promise
PromiseThingy.prototype.promise = function() {
return this.p;
};
// Export public
return PromiseThingy;
})();
// ==== Using it
function later() {
var p = new PromiseThingy();
setTimeout(function() {
p.resolve();
}, 2000);
return p.promise(); // Note we're not returning `p` directly
}
display("Start " + Date.now());
later().then(function() {
display("Done1 " + Date.now());
}).then(function() {
display("Done2 " + Date.now());
});
function display(msg) {
var p = document.createElement('p');
p.innerHTML = String(msg);
document.body.appendChild(p);
}
})();
</script>
</body>
</html>
I ran into this issue when I was trying to restore a backed-up mysql database from a different server. What solved this issue for me was adding certain settings to my.cnf (like in the answers above) and additionally changing the sql backup file:
Step 1: add or edit the following lines in my.cnf:
innodb_page_size=32K
innodb_file_format=Barracuda
innodb_file_per_table=1
Step 2: add ROW_FORMAT=DYNAMIC to the CREATE TABLE statement in the sql backup file for the table that is causing this error:
DROP TABLE IF EXISTS `problematic_table`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `problematic_table` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
...
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 ROW_FORMAT=DYNAMIC;
The important change above is ROW_FORMAT=DYNAMIC (that was not included in the original sql backup file).
source that helped me to resolve this issue: MariaDB and InnoDB MySQL Row size too large
The simplest way is:
echo debug_backtrace()[1]['function'];
I work at a 60-70% zoom view and my dropdowns are unreadable, so I made this simple code to overcome the issue.
Note that I first selected all my dropdown lists (Ctrl + mouse click), went to the Formulas tab, clicked "Define Name", and called them "ProduktSelection".
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
Dim KeyCells As Range
Set KeyCells = Range("ProduktSelection")
If Not Application.Intersect(KeyCells, Range(Target.Address)) _
Is Nothing Then
ActiveWindow.Zoom = 100
End If
End Sub
I then have another sub, Private Sub Worksheet_Change(ByVal Target As Range), where I come back to 65% when the value is changed.
Can you provide an example? PUT should work fine as well.
Documentation -
The type of request to make ("POST" or "GET"); the default is "GET". Note: Other HTTP request methods, such as PUT and DELETE, can also be used here, but they are not supported by all browsers.
I have the example in a fiddle, and the form parameters are passed fine (as it is PUT, they will not be appended to the url) -
$.ajax({
url: '/echo/html/',
type: 'PUT',
data: "name=John&location=Boston",
success: function(data) {
alert('Load was performed.');
}
});
Demo tested from jQuery 1.3.2 onwards on Chrome.
I was creating a JSON-LD site review script.
DECLARE @json VARCHAR(MAX);
The actual JSON is about 94K.
1. I got this to work by using CAST('' AS VARCHAR(MAX)) + @json, as explained by other contributors, so:
SET @json = CAST('' AS VARCHAR(MAX)) + (SELECT .....
2. I also had to change the Query Options: Query Options -> 'results' -> 'grid' -> 'Maximum Characters received' -> 'non-XML Data' set to 2000000. (I left 'results' -> 'text' -> 'Maximum number of characters displayed in each column' as the default.)
You might also find this useful (non-Windows only):
mysql> pager less -SFX
mysql> SELECT * FROM sometable;
This will pipe the output through the less command line tool, which - with these parameters - will give you a tabular output that can be scrolled horizontally and vertically with the cursor keys.
Leave this view by hitting the q key, which will quit the less tool.
Oracle driver:
<dependency>
    <groupId>com.hynnet</groupId>
    <artifactId>jdbc-fo</artifactId>
    <version>12.1.0.2</version>
</dependency>
Found an elegant way from MDN
var vegetables = ['parsnip', 'potato'];
var moreVegs = ['celery', 'beetroot'];
// Merge the second array into the first one
// Equivalent to vegetables.push('celery', 'beetroot');
Array.prototype.push.apply(vegetables, moreVegs);
console.log(vegetables); // ['parsnip', 'potato', 'celery', 'beetroot']
Or you can use the spread operator feature of ES6:
let fruits = [ 'apple', 'banana'];
const moreFruits = [ 'orange', 'plum' ];
fruits.push(...moreFruits); // ["apple", "banana", "orange", "plum"]
Your syntax is for a table-valued function, which returns a result set and can be queried like a table. For a scalar function, do:
select dbo.fun_functional_score('01091400003') as [er]
C99 allows VLAs, and it puts some restrictions on how they can be declared. For details, refer to section 6.7.5.2 of the standard. C++ disallows VLAs, but g++ allows them as an extension.
While the TextBox is selected, press F4 for Properties and, under TextMode, set it to MultiLine. The sign of a multiline text box is that it can be resized on 6 sides, and there is no need to include any newline characters to get multiple lines. Maybe you set it to multiline but didn't increase the size of the TextBox at design time.
You need to run visudo and, in the editor that it opens, write:
igor ALL=(ALL) ALL
That line grants all permissions to the user igor.
If you want to permit running only some commands, you need to list them in the line:
igor ALL=(ALL) /bin/kill, /bin/ps
Remove any config in ~/.ssh/config, or in other ssh config places, that prevents it from reaching the GitHub servers.
In my case it was the config below, and I wasn't connected to the VPN.
Host *
ProxyJump 10.0.0.50
Note: if I am connected to the VPN, this config would mean git clone or any git operation will be sent via the VPN, which might be slow depending on the type of VPN being used.
CAUTION: This answer measures memory usage/availability of the DEVICE. This is NOT what is available to your app. To measure what your APP is doing, and is PERMITTED to do, use android developer's answer.
Android docs - ActivityManager.MemoryInfo
1. Parse /proc/meminfo. You can find reference code here: Get Memory Usage in Android
2. Use the code below to get the current available RAM:
MemoryInfo mi = new MemoryInfo();
ActivityManager activityManager = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
activityManager.getMemoryInfo(mi);
double availableMegs = mi.availMem / 0x100000L;
//Percentage can be calculated for API 16+
double percentAvail = mi.availMem / (double)mi.totalMem * 100.0;
Explanation of the number 0x100000L
1024 bytes == 1 Kibibyte
1024 Kibibyte == 1 Mebibyte
1024 * 1024 == 1048576
1048576 == 0x100000
It's quite obvious that the number is used to convert from bytes to mebibytes.
P.S.: we need to calculate total memory only once, so call point 1 only once in your code, and after that you can call the code of point 2 repeatedly.
I believe some of the respondents of this question have missed the broader implication of the fold function as an abstract tool. Yes, sum can do the same thing for a list of integers, but this is a trivial case. fold is more generic. It is useful when you have a sequence of data structures of varying shape and want to cleanly express an aggregation. So instead of having to build up a for loop with an aggregate variable and manually recompute it each time, a fold function (or the Python version, which reduce appears to correspond to) allows the programmer to express the intent of the aggregation much more plainly by simply providing two things:
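As a minimal sketch of that idea using Python's functools.reduce (the two things being a combining function and an initial value; the order data here is made up for illustration):
from functools import reduce

orders = [
    {"item": "book", "qty": 2, "price": 12.0},
    {"item": "pen", "qty": 10, "price": 1.5},
]

# The combining function folds each order's line total into the running sum,
# starting from the initial value 0.0.
total = reduce(lambda acc, o: acc + o["qty"] * o["price"], orders, 0.0)
print(total)  # 39.0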
For mass assignment of values to an ActiveRecord model without saving, use either the assign_attributes or attributes= methods. These methods are available in Rails 3 and newer. However, there are minor differences and version-related gotchas to be aware of.
Both methods follow this usage:
@user.assign_attributes(model: "Sierra", year: "2012", looks: "Sexy")
@user.attributes = { model: "Sierra", year: "2012", looks: "Sexy" }
Note that neither method will perform validations or execute callbacks; callbacks and validation will happen when save is called.
attributes= differs slightly from assign_attributes in Rails 3. attributes= will check that the argument passed to it is a Hash, and returns immediately if it is not; assign_attributes has no such Hash check. See the ActiveRecord Attribute Assignment API documentation for attributes=.
The following invalid code will silently fail by simply returning without setting the attributes:
@user.attributes = [ { model: "Sierra" }, { year: "2012" }, { looks: "Sexy" } ]
attributes= will silently behave as though the assignments were made successfully, when really, they were not.
This invalid code will raise an exception when assign_attributes tries to stringify the hash keys of the enclosing array:
@user.assign_attributes([ { model: "Sierra" }, { year: "2012" }, { looks: "Sexy" } ])
assign_attributes will raise a NoMethodError exception for stringify_keys, indicating that the first argument is not a Hash. The exception itself is not very informative about the actual cause, but the fact that an exception does occur is very important.
The only difference between these cases is the method used for mass assignment: attributes= silently succeeds, and assign_attributes raises an exception to inform that an error has occurred.
These examples may seem contrived, and they are to a degree, but this type of error can easily occur when converting data from an API, or even just using a series of data transformations and forgetting to Hash[] the results of the final .map. Maintain some code 50 lines above and 3 functions removed from your attribute assignment, and you've got a recipe for failure.
The lesson with Rails 3 is this: always use assign_attributes instead of attributes=.
In Rails 4, attributes= is simply an alias to assign_attributes. See the ActiveRecord Attribute Assignment API documentation for attributes=.
With Rails 4, either method may be used interchangeably. Failure to pass a Hash as the first argument will result in a very helpful exception: ArgumentError: When assigning attributes, you must pass a hash as an argument.
If you're pre-flighting assignments in preparation for a save, you might be interested in validating before save, as well. You can use the valid? and invalid? methods for this. Both return boolean values. valid? returns true if the unsaved model passes all validations, or false if it does not. invalid? is simply the inverse of valid?. valid? can be used like this:
@user.assign_attributes(model: "Sierra", year: "2012", looks: "Sexy")
@user.valid?
This will give you the ability to handle any validation issues in advance of calling save.
The reason why your X-axis plots frequencies only till 500 Hz is your command statement 'f = Fs/2*linspace(0,1,NFFT/2+1);'. Your Fs is 1000. So when you divide it by 2 & then multiply by values ranging from 0 to 1, it returns a vector of length NFFT/2+1. This vector consists of equally spaced frequency values, ranging from 0 to Fs/2 (i.e. 500 Hz). Since you plot using 'plot(f,2*abs(Y(1:NFFT/2+1)))' command, your X-axis limit is 500 Hz.
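To make that concrete, here is a small NumPy sketch of the same axis construction (NFFT = 1024 is just an assumed value for illustration):
import numpy as np

Fs = 1000    # sampling frequency in Hz, as in the question
NFFT = 1024  # FFT length (assumed value for illustration)

# Same as f = Fs/2*linspace(0,1,NFFT/2+1) in MATLAB: NFFT/2+1 equally
# spaced points running from 0 up to Fs/2, i.e. 0..500 Hz.
f = Fs / 2 * np.linspace(0, 1, NFFT // 2 + 1)
print(f[0], f[-1], len(f))  # 0.0 500.0 513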
This is a good overview:
http://reference.jumpingmonkey.org/programming_languages/objective-c/types.html
or run this code:
32 bit process:
NSLog(@"Primitive sizes:");
NSLog(@"The size of a char is: %d.", sizeof(char));
NSLog(@"The size of short is: %d.", sizeof(short));
NSLog(@"The size of int is: %d.", sizeof(int));
NSLog(@"The size of long is: %d.", sizeof(long));
NSLog(@"The size of long long is: %d.", sizeof(long long));
NSLog(@"The size of a unsigned char is: %d.", sizeof(unsigned char));
NSLog(@"The size of unsigned short is: %d.", sizeof(unsigned short));
NSLog(@"The size of unsigned int is: %d.", sizeof(unsigned int));
NSLog(@"The size of unsigned long is: %d.", sizeof(unsigned long));
NSLog(@"The size of unsigned long long is: %d.", sizeof(unsigned long long));
NSLog(@"The size of a float is: %d.", sizeof(float));
NSLog(@"The size of a double is %d.", sizeof(double));
NSLog(@"Ranges:");
NSLog(@"CHAR_MIN: %c", CHAR_MIN);
NSLog(@"CHAR_MAX: %c", CHAR_MAX);
NSLog(@"SHRT_MIN: %hi", SHRT_MIN); // signed short int
NSLog(@"SHRT_MAX: %hi", SHRT_MAX);
NSLog(@"INT_MIN: %i", INT_MIN);
NSLog(@"INT_MAX: %i", INT_MAX);
NSLog(@"LONG_MIN: %li", LONG_MIN); // signed long int
NSLog(@"LONG_MAX: %li", LONG_MAX);
NSLog(@"ULONG_MAX: %lu", ULONG_MAX); // unsigned long int
NSLog(@"LLONG_MIN: %lli", LLONG_MIN); // signed long long int
NSLog(@"LLONG_MAX: %lli", LLONG_MAX);
NSLog(@"ULLONG_MAX: %llu", ULLONG_MAX); // unsigned long long int
When run on an iPhone 3GS (iPod Touch and older iPhones should yield the same result) you get:
Primitive sizes:
The size of a char is: 1.
The size of short is: 2.
The size of int is: 4.
The size of long is: 4.
The size of long long is: 8.
The size of a unsigned char is: 1.
The size of unsigned short is: 2.
The size of unsigned int is: 4.
The size of unsigned long is: 4.
The size of unsigned long long is: 8.
The size of a float is: 4.
The size of a double is 8.
Ranges:
CHAR_MIN: -128
CHAR_MAX: 127
SHRT_MIN: -32768
SHRT_MAX: 32767
INT_MIN: -2147483648
INT_MAX: 2147483647
LONG_MIN: -2147483648
LONG_MAX: 2147483647
ULONG_MAX: 4294967295
LLONG_MIN: -9223372036854775808
LLONG_MAX: 9223372036854775807
ULLONG_MAX: 18446744073709551615
64 bit process:
The size of a char is: 1.
The size of short is: 2.
The size of int is: 4.
The size of long is: 8.
The size of long long is: 8.
The size of a unsigned char is: 1.
The size of unsigned short is: 2.
The size of unsigned int is: 4.
The size of unsigned long is: 8.
The size of unsigned long long is: 8.
The size of a float is: 4.
The size of a double is 8.
Ranges:
CHAR_MIN: -128
CHAR_MAX: 127
SHRT_MIN: -32768
SHRT_MAX: 32767
INT_MIN: -2147483648
INT_MAX: 2147483647
LONG_MIN: -9223372036854775808
LONG_MAX: 9223372036854775807
ULONG_MAX: 18446744073709551615
LLONG_MIN: -9223372036854775808
LLONG_MAX: 9223372036854775807
ULLONG_MAX: 18446744073709551615
I copied this from one of my apps, so there are probably a few extra attributes, but it should give you the idea. This is from the layout that has the progress bar:
<ProgressBar
android:id="@+id/ProgressBar"
style="?android:attr/progressBarStyleHorizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:indeterminate="false"
android:maxHeight="10dip"
android:minHeight="10dip"
android:progress="50"
android:progressDrawable="@drawable/greenprogress" />
Then create a new drawable with something similar to the following (In this case greenprogress.xml
):
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@android:id/background">
<shape>
<corners android:radius="5dip" />
<gradient
android:angle="270"
android:centerColor="#ff5a5d5a"
android:centerY="0.75"
android:endColor="#ff747674"
android:startColor="#ff9d9e9d" />
</shape>
</item>
<item android:id="@android:id/secondaryProgress">
<clip>
<shape>
<corners android:radius="5dip" />
<gradient
android:angle="270"
android:centerColor="#80ffb600"
android:centerY="0.75"
android:endColor="#a0ffcb00"
android:startColor="#80ffd300" />
</shape>
</clip>
</item>
<item android:id="@android:id/progress">
<clip>
<shape>
<corners android:radius="5dip" />
<gradient
android:angle="270"
android:endColor="#008000"
android:startColor="#33FF33" />
</shape>
</clip>
</item>
</layer-list>
You can change up the colors as needed, this will give you a green progress bar.
This is the closest thing you can get to a "CONST Dictionary":
public static int GetValueByName(string name)
{
switch (name)
{
case "bob": return 1;
case "billy": return 2;
default: return -1;
}
}
The compiler will be smart enough to build the code as clean as possible.
If you are on the first line, pressing (upper case) J will join that line and the next line together, removing the newline. You can also combine this with a count, so pressing 3J will combine all 3 lines together.
Add System.Configuration as a reference, then:
using System.Configuration;
...
string conn =
ConfigurationManager.ConnectionStrings["ConnectionName"].ConnectionString;
You cannot call git remote set-url origin just after git init, because the git remote set-url command will not create origin; it changes an existing remote repository URL.
So the command git remote set-url will only work if you've either cloned the repository or manually added a remote called origin.
You can check the remote with the command git remote -v; it will show the remote url after the name. If this command gives an error like fatal: Not a git repository (or any of the parent directories): .git, then the repository does not exist, so you have to add origin with the command git remote add.
1. git remote add
This command is used to add a new remote; you can use it on the terminal, in the directory of your repository.
The git remote add command takes two arguments:
For example:
git remote add origin https://github.com/user/repo.git
2. git remote set-url
The git remote set-url command changes an existing remote repository URL.
The git remote set-url command takes two arguments: an existing remote name (origin or upstream are two common choices) and the new URL for the remote.
For example, you can change your remote's URL from SSH to HTTPS with the git remote set-url command.
git remote set-url origin https://github.com/USERNAME/REPOSITORY.git
You can verify that the remote URL has changed with the command git remote -v.
note: "origin" is a convention not part of the command. "origin" is the local name of the remote repository. you can use any name instead of "origin".
For example:
git remote add myorigin git@github.com:user/repo.git
git remote set-url myorigin https://github.com/user/repo.git
References from github: remote add, remote set-url
for (int i = 0; i < list.size(); i++) {
for (int j = i+1; j < list.size(); j++) {
// compare list.get(i) and list.get(j)
}
}
Sometimes autoplay is needed. Someone once pointed out that the famous Les Paul Google Doodle (2011) required autoplay, even though the sound didn't play until you moused over the guitar strings. If it's done with class and great design it can be beautiful (especially movie websites with immersive design)
public class SimpleObjectDictionaryMapper<TObject>
{
public static TObject GetObject(IDictionary<string, object> d)
{
PropertyInfo[] props = typeof(TObject).GetProperties();
TObject res = Activator.CreateInstance<TObject>();
for (int i = 0; i < props.Length; i++)
{
if (props[i].CanWrite && d.ContainsKey(props[i].Name))
{
props[i].SetValue(res, d[props[i].Name], null);
}
}
return res;
}
public static IDictionary<string, object> GetDictionary(TObject o)
{
IDictionary<string, object> res = new Dictionary<string, object>();
PropertyInfo[] props = typeof(TObject).GetProperties();
for (int i = 0; i < props.Length; i++)
{
if (props[i].CanRead)
{
res.Add(props[i].Name, props[i].GetValue(o, null));
}
}
return res;
}
}
Try this if the above solutions aren't working; it worked for me.
Cut the whole contents of the worksheet using "Ctrl + A" followed by "Ctrl + X" and paste them into a new sheet. Your references to formulas will remain intact when you cut and paste.
I know it's an older post, but I'm still posting for someone like me. This worked for me:
<link rel='shortcut icon' type='image/x-icon' href='favicon.ico' />
Put your favicon icon in the root directory.
$.getJSON('//freegeoip.net/json/?callback=?', function(data) {
console.log(JSON.stringify(data, null, 2));
});
You could use this query to get the number of indexes as well as the index names of each table in a specified database.
SELECT TABLE_NAME,
COUNT(1) index_count,
GROUP_CONCAT(DISTINCT(index_name) SEPARATOR ',\n ') indexes
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'mydb'
AND INDEX_NAME != 'primary'
GROUP BY TABLE_NAME
ORDER BY COUNT(1) DESC;
You could use awk for this. Change '$2' to the nth column you want.
awk -F "\"*,\"*" '{print $2}' textfile.csv
To concatenate long lines without whitespace, use double quotes and escape the newlines with backslashes:
key: "Loremipsumdolorsitamet,consecteturadipiscingelit,seddoeiusmodtemp\
orincididuntutlaboreetdoloremagnaaliqua."
(Thanks @Tobia)
I think this thread actually has the easiest, cleanest solution, one that does not sacrifice the JSON marshalling tools that Spring provides:
Yes, you want to use tee:
tee - read from standard input and write to standard output and files
Just pipe your command to tee and pass the file as an argument, like so:
exec > >(tee ${LOG_FILE})
This both prints the output to STDOUT and writes the same output to a log file. See man tee for more information.
Note that this won't write stderr to the log file, so if you want to combine the two streams then use:
exec > >(tee ${LOG_FILE}) 2>&1
and-ing strings does not do what you think it does - use any to check if any of the strings are in the list:
while not any(word in list_of_words for word in ['AND', 'OR', 'NOT']):
print 'No boolean'
Also, if you want a simple check, an if might be better suited than a while...
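For example, a minimal sketch of the if variant (same check as above, just run once; the word list here is an assumed input):
list_of_words = ['this', 'is', 'a', 'query']  # example input, assumed

if not any(word in list_of_words for word in ['AND', 'OR', 'NOT']):
    print('No boolean')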
For HTML generated in JS, do as below (we are using a single quote as the string wrapper). Each argument has to be wrapped in a single quote, or else all of your arguments will be considered a single argument, like functionName('a,b') - now it's a single argument with the value a,b.
We have to use the string escape character backslash (\) to close the first argument with a single quote, give a separator comma in between, and then start the next argument with a single quote. (This is the magic code to use: '\',\'' )
$('#ValuationAssignedTable').append('<tr> <td><a href=# onclick="return ReAssign(\'' + valuationId +'\',\'' + user + '\')">Re-Assign</a> </td> </tr>');
That header file is not part of the C++ standard, is therefore non-portable, and should be avoided.
Moreover, even if there were some catch-all header in the standard, you would want to avoid it in favor of specific headers, since the compiler has to actually read in and parse every included header (including recursively included headers) every single time that translation unit is compiled.
HTML
<input type="button">
CSS
input[type=button]{
background:transparent;
border:0;
}
If you have something else altering the DB (say another process) and need to ensure you see these changes, use AsNoTracking(); otherwise EF may give you the last copy that your context had instead, hence it being good to usually use a new context for every query:
http://codethug.com/2016/02/19/Entity-Framework-Cache-Busting/
I made some modifications to SMNALLY's code so it can run in Python 3.5.2. This is my result:
#Import the following libraries to make use of DispatchEx to run the macro
import os
import win32com.client as wincl
def runMacro():
if os.path.exists("C:\\Users\\Dev\\Desktop\\Development\\completed_apps\\My_Macr_Generates_Data.xlsm"):
# DispatchEx is required in the newest versions of Python.
excel_macro = wincl.DispatchEx("Excel.application")
excel_path = os.path.expanduser("C:\\Users\\Dev\\Desktop\\Development\\completed_apps\\My_Macr_Generates_Data.xlsm")
workbook = excel_macro.Workbooks.Open(Filename = excel_path, ReadOnly =1)
excel_macro.Application.Run\
("ThisWorkbook.Template2G")
#Save the results in case you have generated data
workbook.Save()
excel_macro.Application.Quit()
del excel_macro
You are currently comparing the addresses of the two strings.
Use strcmp to compare the values of two char arrays
if (strcmp(namet2, nameIt2) != 0)
Use :
norm = [float(i)/sum(raw) for i in raw]
to normalize against the sum to ensure that the sum is always 1.0 (or as close to it as possible).
Use
norm = [float(i)/max(raw) for i in raw]
to normalize against the maximum
This exact error was related to a content block by YouTube when content is "playbacked on certain sites or applications". More specifically by WMG (Warner Music Group).
The error message did, however, suggest that an https iframe import on an http site was the issue, which it wasn't in this case.
To summarize very briefly, I would say that Spring is the "glue" in your application. It's used to integrate different frameworks and your own code.
These two styles of filtering are equivalent in most cases, but when querying on objects based on a ForeignKey or ManyToManyField, they are slightly different.
Examples from the documentation.
model
Blog to Entry is a one-to-many relation.
from django.db import models
class Blog(models.Model):
...
class Entry(models.Model):
blog = models.ForeignKey(Blog)
headline = models.CharField(max_length=255)
pub_date = models.DateField()
...
objects
Assuming there are some blog and entry objects here.
queries
Blog.objects.filter(entry__headline__contains='Lennon',
                    entry__pub_date__year=2008)
Blog.objects.filter(entry__headline__contains='Lennon').filter(
                    entry__pub_date__year=2008)
For the 1st query (single filter one), it match only blog1.
For the 2nd query (chained filters one), it filters out blog1 and blog2.
The first filter restricts the queryset to blog1, blog2 and blog5; the second filter restricts the set of blogs further to blog1 and blog2.
And you should realize that
We are filtering the Blog items with each filter statement, not the Entry items.
So, it's not the same, because Blog and Entry are multi-valued relationships.
Reference: https://docs.djangoproject.com/en/1.8/topics/db/queries/#spanning-multi-valued-relationships
If there is something wrong, please correct me.
Edit: Changed v1.6 to v1.8 since the 1.6 links are no longer available.
XmlTextWriter xw = new XmlTextWriter(writer);
xw.Formatting = Formatting.Indented;
Something like the standard Collection.toArray(T[]) should do what you need (note that ArrayList implements Collection):
TypeA[] array = a.toArray(new TypeA[a.size()]);
On a side note, you should consider defining a to be of type List<TypeA> rather than ArrayList<TypeA>; this avoids an implementation-specific definition that may not really be applicable for your application.
Also, please see this question about the use of a.size() instead of 0 as the size of the array passed to a.toArray(TypeA[]).
Assume that you have a tr row with multiple checkboxes in it, and you want to count only if the first checkbox is checked.
You can do that by giving a class to the first checkbox, for example class='mycxk', and you can count that using the filter, like this:
$('.mycxk').filter(':checked').length
Both setInterval and setTimeout can work for you (as @Doug Neiner and @John Boker wrote; both now point to setInterval).
See here for some more explanation about both, to see which suits you most and how to stop each of them.
The accepted answer only shows files in the current directory's tree. To show all of the tracked files that have been committed (on the current branch), use
git ls-tree --full-tree --name-only -r HEAD
--full-tree makes the command run as if you were in the repo's root directory.
-r recurses into subdirectories. Combined with --full-tree, this gives you all committed, tracked files.
--name-only removes SHA / permission info for when you just want the file paths.
HEAD specifies which branch you want the list of tracked, committed files for. You could change this to master or any other branch name, but HEAD is the commit you have checked out right now.
This is the method from the accepted answer to the ~duplicate question https://stackoverflow.com/a/8533413/4880003.
I think the V$SQLAREA table contains what you're looking for (see columns SQL_TEXT and SQL_FULLTEXT).
Simply put, you are doing this completely backward.
You should not be approaching this from what URLs you should be using. The URLs will effectively come "for free" once you've decided upon what resources are necessary for your system AND how you will represent those resources, and the interactions between the resources and application state.
To quote Roy Fielding
A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types). [Failure here implies that out-of-band information is driving interaction instead of hypertext.]
Folks always start with the URIs and think this is the solution, and then they tend to miss a key concept in REST architecture, notably, as quoted above, "Failure here implies that out-of-band information is driving interaction instead of hypertext."
To be honest, many see a bunch of URIs and some GETs and PUTs and POSTs and think REST is easy. REST is not easy. RPC over HTTP is easy, moving blobs of data back and forth proxied through HTTP payloads is easy. REST, however, goes beyond that. REST is protocol agnostic. HTTP is just very popular and apt for REST systems.
REST lives in the media types, their definitions, and how the application drives the actions available to those resources via hypertext (links, effectively).
There are different view about media types in REST systems. Some favor application specific payloads, while others like uplifting existing media types in to roles that are appropriate for the application. For example, on the one hand you have specific XML schemas designed suited to your application versus using something like XHTML as your representation, perhaps through microformats and other mechanisms.
Both approaches have their place, I think, the XHTML working very well in scenarios that overlap both the human driven and machine driven web, whereas the former, more specific data types I feel better facilitate machine to machine interactions. I find the uplifting of commodity formats can make content negotiation potentially difficult. "application/xml+yourresource" is much more specific as a media type than "application/xhtml+xml", as the latter can apply to many payloads which may or may not be something a machine client is actually interested in, nor can it determine without introspection.
However, XHTML works very well (obviously) in the human web where web browsers and rendering is very important.
Your application will guide you in those kinds of decisions.
Part of the process of designing a REST system is discovering the first class resources in your system, along with the derivative, support resources necessary to support the operations on the primary resources. Once the resources are discovered, then the representation of those resources, as well as the state diagrams showing resource flow via hypertext within the representations, become the next challenge.
Recall that each representation of a resource, in a hypertext system, combines both the actual resource representation along with the state transitions available to the resource. Consider each resource a node in a graph, with the links being the lines leaving that node to other states. These links inform clients not only what can be done, but what is required for them to be done (as a good link combines the URI and the media type required).
For example, you may have:
<link href="http://example.com/users" rel="users" type="application/xml+usercollection"/>
<link href="http://example.com/users?search" rel="search" type="application/xml+usersearchcriteria"/>
Your documentation will talk about the rel field named "users", and the media type of "application/xml+youruser".
These links may seem redundant, they're all talking to the same URI, pretty much. But they're not.
This is because for the "users" relation, that link is talking about the collection of users, and you can use the uniform interface to work with the collection (GET to retrieve all of them, DELETE to delete all of them, etc.)
If you POST to this URL, you will need to pass a "application/xml+usercollection" document, which will probably only contain a single user instance within the document so you can add the user, or not, perhaps, to add several at once. Perhaps your documentation will suggest that you can simply pass a single user type, instead of the collection.
You can see what the application requires in order to perform a search, as defined by the "search" link and its media type. The documentation for the search media type will tell you how this behaves, and what to expect as results.
The takeaway here, though, is the URIs themselves are basically unimportant. The application is in control of the URIs, not the clients. Beyond a few 'entry points', your clients should rely on the URIs provided by the application for its work.
The client needs to know how to manipulate and interpret the media types, but doesn't much need to care where it goes.
These two links are semantically identical in a client's eyes:
<link href="http://example.com/users?search" rel="search" type="application/xml+usersearchcriteria"/>
<link href="http://example.com/AW163FH87SGV" rel="search" type="application/xml+usersearchcriteria"/>
So, focus on your resources. Focus on their state transitions in the application and how that's best achieved.
The answers above describe well why and how it is used on Twitter and Facebook; what I missed is an explanation of what # does by default...
On a 'normal' page (not a single-page application) you can do anchoring with the hash to any element that has an id, by placing that element's id in the url after the hash #.
Example:
(On Chrome) press F12, or right-click and choose Inspect element, then take id="answer-10831233" and add it to the url like the following:
https://stackoverflow.com/questions/3009380/whats-the-shebang-hashbang-in-facebook-and-new-twitter-urls-for#answer-10831233
and you will get a link that jumps to that element on the page:
What's the shebang/hashbang (#!) in Facebook and new Twitter URLs for?
By using # in the way described in the answers above, you are introducing conflicting behaviour... although I wouldn't lose sleep over it... since Angular, it became somewhat of a standard...
I know it sounds crazy, but I solved this issue by changing the datagrid.visible property to false and then to true. It causes a very small blink, but it works for me (at least for now).
import java.io.UnsupportedEncodingException;
/**
* File: BOM.java
*
* check if the bom character is present in the given string print the string
* after skipping the utf-8 bom characters print the string as utf-8 string on a
* utf-8 console
*/
public class BOM
{
private final static String BOM_STRING = "Hello World";
private final static String ISO_ENCODING = "ISO-8859-1";
private final static String UTF8_ENCODING = "UTF-8";
private final static int UTF8_BOM_LENGTH = 3;
public static void main(String[] args) throws UnsupportedEncodingException {
final byte[] bytes = BOM_STRING.getBytes(ISO_ENCODING);
if (isUTF8(bytes)) {
printSkippedBomString(bytes);
printUTF8String(bytes);
}
}
private static void printSkippedBomString(final byte[] bytes) throws UnsupportedEncodingException {
int length = bytes.length - UTF8_BOM_LENGTH;
byte[] barray = new byte[length];
System.arraycopy(bytes, UTF8_BOM_LENGTH, barray, 0, barray.length);
System.out.println(new String(barray, ISO_ENCODING));
}
private static void printUTF8String(final byte[] bytes) throws UnsupportedEncodingException {
System.out.println(new String(bytes, UTF8_ENCODING));
}
private static boolean isUTF8(byte[] bytes) {
if ((bytes[0] & 0xFF) == 0xEF &&
(bytes[1] & 0xFF) == 0xBB &&
(bytes[2] & 0xFF) == 0xBF) {
return true;
}
return false;
}
}
Wireshark + OSX + iOS:
Great overview so far, but if you want specifics for Wireshark + OSX + iOS:
Run rvictl -s x, where x is the UDID of your iOS device. You can find the UDID of your iOS device via iTunes (make sure you are using the UDID and not the serial number).
In Wireshark, go to Capture->Options; a dialog box appears. Click on the line rvi0, then press the Start button.
ip.addr==204.144.14.134 views traffic with a source or destination address of 204.144.14.134
http views only http traffic
views only http trafficHere's a sample window depicting TCP traffic for for pdf download from 204.144.14.134:
The plugin that @DarinDimitrov posted, jquery-mousewheel, is broken with jQuery 3+. It would be more advisable to use jquery-wheel, which works with jQuery 3+.
If you don't want to go the jQuery route, MDN highly cautions against using the mousewheel event, as it's nonstandard and unsupported in many places. It instead says that you should use the wheel event, as you get much more specificity over exactly what the values you're getting mean. It's supported by most major browsers.
I had this problem. I searched the internet, took all the advice, changed configurations, but the problem was still there. Finally, with the help of the server administrator, we found that the problem lay in the MySQL database column definition. One of the columns in a table was defined as LONGTEXT, which leads to allocating 4,294,967,295 bytes of memory. It seems to work OK if you don't use a MySQLi prepared statement, but once you use a prepared statement, it tries to allocate that amount of memory. I changed the column type to MEDIUMTEXT, which needs 16,777,215 bytes of memory space. The problem is gone. Hope this helps.
I am just adding this answer for an awkward situation from the DB provider.
What happened in our case is that the primary and secondary databases were swapped (primary to secondary and vice versa), and we were getting the same error.
So please check the database status in the configuration settings, which may help you.
Here is a simple way to do this using either PDO or mysqli
$stmt = $pdo->prepare("SELECT Column FROM foo");
// careful, without a LIMIT this can take long if your table is huge
$stmt->execute();
$array = $stmt->fetchAll(PDO::FETCH_COLUMN);
print_r($array);
or, using mysqli
$stmt = $mysqli->prepare("SELECT Column FROM foo");
$stmt->execute();
$array = [];
foreach ($stmt->get_result() as $row)
{
$array[] = $row['Column'];
}
print_r($array);
Array
(
[0] => 7960
[1] => 7972
[2] => 8028
[3] => 8082
[4] => 8233
)
You could split on all commas that have an even number of quotes following them.
You would also want to look at the spec for the CSV format regarding how commas are handled.
Useful Link: C# Regex Split - commas outside quotes
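For illustration, here is a minimal Python sketch of that idea (splitting only on commas that are followed by an even number of double quotes, i.e. commas outside quoted fields); it assumes simple quoting with no escaped quotes inside fields:
import re

line = 'one,"two, with comma",three'
# A comma is a field separator only if the rest of the line contains
# an even number of double quotes after it.
fields = re.split(r',(?=(?:[^"]*"[^"]*")*[^"]*$)', line)
print(fields)  # ['one', '"two, with comma"', 'three']
For real CSV data, a dedicated CSV parser (such as Python's csv module) handles these cases for you.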
A cookie is basically just an item in a dictionary. Each item has a key and a value. For authentication, the key could be something like 'username' and the value would be the username. Each time you make a request to a website, your browser will include the cookies in the request, and the host server will check the cookies. So authentication can be done automatically like that.
To set a cookie, you just have to add it to the response the server sends back after requests. The browser will then add the cookie upon receiving the response.
There are different options you can configure for the cookie server-side, like expiration times or encryption. An encrypted cookie is often referred to as a signed cookie. Basically the server encrypts the key and value in the dictionary item, so only the server can make use of the information, which keeps the cookie secure.
A browser will save the cookies set by the server. In the HTTP header of every request the browser makes to that server, it will add the cookies. It will only add cookies for the domains that set them. Example.com can set a cookie and also add options in the HTTP header so that browsers send the cookie back to subdomains, like sub.example.com. It would be unacceptable for a browser to ever send cookies to a different domain.
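As a rough, framework-agnostic illustration, here is a minimal Python sketch of building and parsing a cookie header; the session_id name and abc123 value are made up for the example:
from http import cookies

# Server side: build the Set-Cookie header to send back with the response
c = cookies.SimpleCookie()
c["session_id"] = "abc123"         # the key/value pair, just like a dictionary item
c["session_id"]["max-age"] = 3600  # optional attributes such as expiration
print(c.output())                  # Set-Cookie: session_id=abc123; Max-Age=3600

# On later requests the browser echoes the cookie back in a Cookie header,
# which the server can parse again
incoming = cookies.SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)  # abc123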
Here you go:
<html>
<head>
    <title>Cols</title>
    <style>
        #left {
            width: 200px;
            float: left;
        }
        #right {
            margin-left: 200px;
            /* Change this to whatever the width of your left column is*/
        }
        .clear {
            clear: both;
        }
    </style>
</head>

<body>
    <div id="container">
        <div id="left">
            Hello
        </div>
        <div id="right">
            <div style="background-color: red; height: 10px;">Hello</div>
        </div>
        <div class="clear"></div>
    </div>
</body>

</html>
See it in action here: http://jsfiddle.net/FVLMX/
https://onpyth.blogspot.com/2019/08/check-ping-connectivity-to-multiple-host.html
Above link is to create Python script for checking connectivity. You can use similar method and use:
ping -w 1 -c 1 "IP Address"
Command to create bash script.
The following code works as a count-up timer. It's pure JavaScript code which shows hour:minute:second
. It also has a STOP button:
var timerVar = setInterval(countTimer, 1000);
var totalSeconds = 0;
function countTimer() {
++totalSeconds;
var hour = Math.floor(totalSeconds /3600);
var minute = Math.floor((totalSeconds - hour*3600)/60);
var seconds = totalSeconds - (hour*3600 + minute*60);
if(hour < 10)
hour = "0"+hour;
if(minute < 10)
minute = "0"+minute;
if(seconds < 10)
seconds = "0"+seconds;
document.getElementById("timer").innerHTML = hour + ":" + minute + ":" + seconds;
}
<div id="timer"></div>
<div id ="stop_timer" onclick="clearInterval(timerVar)">Stop time</div>
It is possible to make a unique object identifier in Visual Studio: In the watch window, right-click the object variable and choose Make Object ID from the context menu.
Unfortunately, this is a manual step, and I don't believe the identifier can be accessed via code.
You have to check every ID in tableA against every ID in tableB. A fully featured RDBMS (such as Oracle) would be able to optimize that into an INDEX FULL FAST SCAN and not touch the table at all. I don't know whether H2's optimizer is as smart as that.
H2 does support the MINUS syntax so you should try this
select id from tableA
minus
select id from tableB
order by id desc
That may perform faster; it is certainly worth benchmarking.
There are different methods to make an empty DIV with float: left or float: right visible.
Here are the ones I know:
- width (or min-width) together with height (or min-height)
- padding-top
- padding-bottom
- border-top
- border-bottom
- ::before or ::after with {content: "\200B";} or {content: "."; visibility: hidden;}
- some content (e.g. a zero-width space) inside the DIV (this sometimes can bring unexpected effects, e.g. in combination with text-decoration: underline;)
Sometimes Excel will hiccup and needs a kick-start to reapply an equation. This happens in some cases when you are using custom formulas.
Make sure that you have the following script
ActiveSheet.EnableCalculation = True
Reapply the equation of choice.
Cells(RowA,ColB).Formula = Cells(RowA,ColB).Formula
This can then be looped as needed.
If you run into trouble with the Windows cmd command line or .bat files, just escape percents like this:
git show -s --format=%%ct
The % character has a special meaning for command line parameters and FOR parameters. To treat a percent as a regular character, double it: %%
Here's a function that'll do it:
def totuple(a):
try:
return tuple(totuple(i) for i in a)
except TypeError:
return a
And an example:
>>> array = numpy.array(((2,2),(2,-2)))
>>> totuple(array)
((2, 2), (2, -2))
I used http://codingbat.com/ . A great website that not only takes one answer, like Project Euler, but also checks your code for more robustness by running it through multiple tests. It asks for much broader code than Project Euler, but its also much simpler than most Euler problems. It also has progress graphs which are pretty cool.
You can also pass the parameters through the command line. Command line arguments are stored in the array ARGV, so ARGV[0] is the first number and ARGV[1] the second number.
#!/usr/bin/ruby
first_number = ARGV[0].to_i
second_number = ARGV[1].to_i
puts first_number + second_number
and you call it like this
% ./plus.rb 5 6
==> 11
Besides the for-loop based solutions, you can also use an ostream_iterator<>. Here's an example that leverages the sample code in the (now retired) SGI STL reference:
#include <iostream>
#include <iterator>
#include <algorithm>
int main()
{
short foo[] = { 1, 3, 5, 7 };
using namespace std;
copy(foo,
foo + sizeof(foo) / sizeof(foo[0]),
ostream_iterator<short>(cout, "\n"));
}
This generates the following:
./a.out
1
3
5
7
However, this may be overkill for your needs. A straight for-loop is probably all that you need, although litb's template sugar is quite nice, too.
Edit: Forgot the "printing in reverse" requirement. Here's one way to do it:
#include <iostream>
#include <iterator>
#include <algorithm>
int main()
{
short foo[] = { 1, 3, 5, 7 };
using namespace std;
reverse_iterator<short *> begin(foo + sizeof(foo) / sizeof(foo[0]));
reverse_iterator<short *> end(foo);
copy(begin,
end,
ostream_iterator<short>(cout, "\n"));
}
and the output:
$ ./a.out
7
5
3
1
Edit: C++14 update that simplifies the above code snippets using array iterator functions like std::begin() and std::rbegin():
#include <iostream>
#include <iterator>
#include <algorithm>
int main()
{
short foo[] = { 1, 3, 5, 7 };
// Generate array iterators using C++14 std::{r}begin()
// and std::{r}end().
// Forward
std::copy(std::begin(foo),
std::end(foo),
std::ostream_iterator<short>(std::cout, "\n"));
// Reverse
std::copy(std::rbegin(foo),
std::rend(foo),
std::ostream_iterator<short>(std::cout, "\n"));
}
You can (should) declare it as extern
in a header file, and define it in exactly 1 .c file.
Note that that .c file should also use the header and that the standard pattern looks like:
// file.h
extern int x; // declaration
// file.c
#include "file.h"
int x = 1; // definition and re-declaration
You can use the below query to identify the values. But please keep in mind that this will not give you the results from encrypted stored procedure.
SELECT DISTINCT OBJECT_NAME(comments.id) OBJECT_NAME
,objects.type_desc
FROM syscomments comments
,sys.objects objects
WHERE comments.id = objects.object_id
AND TEXT LIKE '%CreatedDate%'
ORDER BY 1
Well, curl could be a better option for the JSON representation, but in that case it would be difficult to understand the structure of the JSON because it's in the command line. If you want to see your JSON in the browser, simply remove all the XML annotations like
@XmlRootElement(name="person")
@XmlAccessorType(XmlAccessType.NONE)
@XmlAttribute
@XmlElement
from your model class and then run the same URL you used for the XML representation.
Make sure that you have the jackson-databind dependency in your pom.xml:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.4.1</version>
</dependency>
The issue here is that you've opened a file and read its contents so the cursor is at the end of the file. By writing to the same file handle, you're essentially appending to the file.
The easiest solution would be to close the file after you've read it in, then reopen it for writing.
with open("replayScript.json", "r") as jsonFile:
data = json.load(jsonFile)
data["location"] = "NewPath"
with open("replayScript.json", "w") as jsonFile:
json.dump(data, jsonFile)
Alternatively, you can use seek()
to move the cursor back to the beginning of the file then start writing, followed by a truncate()
to deal with the case where the new data is smaller than the previous.
with open("replayScript.json", "r+") as jsonFile:
data = json.load(jsonFile)
data["location"] = "NewPath"
jsonFile.seek(0) # rewind
json.dump(data, jsonFile)
jsonFile.truncate()
TL;DR: sleep infinity
actually sleeps the maximum time allowed, which is finite.
Wondering why this is not documented anywhere, I bothered to read the sources from GNU coreutils and I found it executes roughly what follows:
It calls strtod from the C stdlib on the first argument to convert 'infinity' to a double precision value. So, assuming IEEE 754 double precision, the 64-bit positive infinity value is stored in the seconds variable.
It then calls xnanosleep(seconds) (found in gnulib), which in turn invokes dtotimespec(seconds) (also in gnulib) to convert from double to struct timespec.
struct timespec is just a pair of numbers: an integer part (in seconds) and a fractional part (in nanoseconds). Naïvely converting positive infinity to an integer would result in undefined behaviour (see §6.3.1.4 of the C standard), so instead it truncates to TYPE_MAXIMUM(time_t).
TYPE_MAXIMUM(time_t) is not set in the standard (even sizeof(time_t) isn't); so, for the sake of example, let's pick x86-64 from a recent Linux kernel. This is TIME_T_MAX in the Linux kernel, which is defined (in time.h) as:
(time_t)((1UL << ((sizeof(time_t) << 3) - 1)) - 1)
Note that time_t is __kernel_time_t and __kernel_time_t is long; the LP64 data model is used, so sizeof(long) is 8 (64 bits).
Which results in: TIME_T_MAX = 9223372036854775807.
That is: sleep infinity results in an actual sleep time of 9223372036854775807 seconds (about 10^11 years). For 32-bit Linux systems (sizeof(long) is 4, i.e. 32 bits), it is 2147483647 seconds (68 years; see also the year 2038 problem).
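As a quick sanity check of those numbers, here is a small Python sketch of the same truncation arithmetic (purely illustrative):
# Maximum value of a signed 64-bit and 32-bit time_t
time_t_max_64 = (1 << 63) - 1
time_t_max_32 = (1 << 31) - 1
print(time_t_max_64)  # 9223372036854775807
print(time_t_max_32)  # 2147483647
# Rough conversion to years (365.25-day years)
print(time_t_max_64 / (365.25 * 24 * 3600))  # ~2.9e11 years
print(time_t_max_32 / (365.25 * 24 * 3600))  # ~68 years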
Edit: apparently the nanoseconds
function called is not directly the syscall, but an OS-dependent wrapper (also defined in gnulib).
There's an extra step as a result: for some systems where HAVE_BUG_BIG_NANOSLEEP
is true
the sleep is truncated to 24 days and then called in a loop. This is the case for some (or all?) Linux distros. Note that this wrapper may be not used if a configure-time test succeeds (source).
In particular, that would be 24 * 24 * 60 * 60 = 2073600 seconds
(plus 999999999 nanoseconds); but this is called in a loop in order to respect the specified total sleep time. Therefore the previous conclusions remain valid.
In conclusion, the resulting sleep time is not infinite but high enough for all practical purposes, even if the resulting actual time lapse is not portable; that depends on the OS and architecture.
To answer the original question, this is obviously good enough, but if for some reason (a very resource-constrained system) you really want to avoid a useless extra countdown timer, I guess the most correct alternative is to use the cat method described in other answers.
Edit: recent GNU coreutils versions will try to use the pause
syscall (if available) instead of looping. The previous argument is no longer valid when targeting these newer versions in Linux (and possibly BSD).
This is an important valid concern:
- sleep infinity is a GNU coreutils extension not contemplated in POSIX. GNU's implementation also supports a "fancy" syntax for time durations, like sleep 1h 5.2s, while POSIX only allows a positive integer (e.g. sleep 0.5 is not allowed).
- (FANCY_SLEEP and FLOAT_DURATION).
- strtod behaviour is C and POSIX compatible (i.e. strtod("infinity", 0) is always valid in C99-conformant implementations, see §7.20.1.3).
Please check the XML file for any junk character like this: ?. If one exists, please use the following syntax to remove it.
String XString = writer.toString();
XString = XString.replaceAll("[^\\x20-\\x7e]", "");
If you are still thinking "what the...?!", you would not be alone; it's actually not that complicated really, let me explain.
We want to turn the following list into a dictionary using the odd entries (counting from 1) as keys mapped to their consecutive even entries.
l = ["a", "b", "c", "d", "e"]
To create a dictionary we can use the built-in dict function for Mapping Types; as per the manual, the following forms are supported.
dict(one=1, two=2)
dict({'one': 1, 'two': 2})
dict(zip(('one', 'two'), (1, 2)))
dict([['two', 2], ['one', 1]])
The last option suggests that we supply a list of lists with 2 values or (key, value)
tuples, so we want to turn our sequential list into:
l = [["a", "b"], ["c", "d"], ["e",]]
We are also introduced to the zip
function, one of the built-in functions which the manual explains:
returns a list of tuples, where the i-th tuple contains the i-th element from each of the arguments
In other words if we can turn our list into two lists a, c, e
and b, d
then zip
will do the rest.
Slicing is something we see used with strings, and also further on in the List section, which mainly uses the range or short slice notation; but this is what the long slice notation looks like and what we can accomplish with a step:
>>> l[::2]
['a', 'c', 'e']
>>> l[1::2]
['b', 'd']
>>> zip(['a', 'c', 'e'], ['b', 'd'])
[('a', 'b'), ('c', 'd')]
>>> dict(zip(l[::2], l[1::2]))
{'a': 'b', 'c': 'd'}
Even though this is the simplest way to understand the mechanics involved there is a downside because slices are new list objects each time, as can be seen with this cloning example:
>>> a = [1, 2, 3]
>>> b = a
>>> b
[1, 2, 3]
>>> b is a
True
>>> b = a[:]
>>> b
[1, 2, 3]
>>> b is a
False
Even though b looks like a they are two separate objects now and this is why we prefer to use the grouper recipe instead.
Although the grouper is explained as part of the itertools module it works perfectly fine with the basic functions too.
Some serious voodoo, right? =) But it's actually nothing more than a bit of syntactic sugar; the grouper recipe is accomplished by the following expression.
*[iter(l)]*2
Which more or less translates to two arguments of the same iterator wrapped in a list, if that makes any sense. Let's break it down to help shed some light.
>>> l*2
['a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e']
>>> [l]*2
[['a', 'b', 'c', 'd', 'e'], ['a', 'b', 'c', 'd', 'e']]
>>> [iter(l)]*2
[<listiterator object at 0x100486450>, <listiterator object at 0x100486450>]
>>> zip([iter(l)]*2)
[(<listiterator object at 0x1004865d0>,),(<listiterator object at 0x1004865d0>,)]
>>> zip(*[iter(l)]*2)
[('a', 'b'), ('c', 'd')]
>>> dict(zip(*[iter(l)]*2))
{'a': 'b', 'c': 'd'}
As you can see, the addresses for the two iterators remain the same, so we are working with the same iterator. zip then gets a key, then a value, then a key and a value again, each time stepping the same iterator, to accomplish what we did with the slices much more efficiently.
You would accomplish very much the same with the following, which perhaps carries a smaller "what the?" factor.
>>> it = iter(l)
>>> dict(zip(it, it))
{'a': 'b', 'c': 'd'}
What about the leftover key e? If you've noticed, it has been missing from all the examples, because zip picks the shortest of the two arguments. So what are we to do?
Well one solution might be adding an empty value to odd length lists, you may choose to use append
and an if
statement which would do the trick, albeit slightly boring, right?
>>> if len(l) % 2:
... l.append("")
>>> l
['a', 'b', 'c', 'd', 'e', '']
>>> dict(zip(*[iter(l)]*2))
{'a': 'b', 'c': 'd', 'e': ''}
Now before you shrug away to go type from itertools import izip_longest
you may be surprised to know it is not required, we can accomplish the same, even better IMHO, with the built in functions alone.
I prefer to use the map() function instead of izip_longest(), which not only uses shorter syntax and doesn't require an import, but can also assign an actual None empty value when required, automagically.
>>> l = ["a", "b", "c", "d", "e"]
>>> l
['a', 'b', 'c', 'd', 'e']
>>> dict(map(None, *[iter(l)]*2))
{'a': 'b', 'c': 'd', 'e': None}
Comparing the performance of the two methods, as pointed out by KursedMetal, it is clear that the itertools module far outperforms the map function on large volumes, as a benchmark against 10 million records shows.
$ time python -c 'dict(map(None, *[iter(range(10000000))]*2))'
real 0m3.755s
user 0m2.815s
sys 0m0.869s
$ time python -c 'from itertools import izip_longest; dict(izip_longest(*[iter(range(10000000))]*2, fillvalue=None))'
real 0m2.102s
user 0m1.451s
sys 0m0.539s
However, the cost of importing the module takes its toll on smaller datasets, with map returning much quicker up to around 100 thousand records, when they start arriving head to head.
$ time python -c 'dict(map(None, *[iter(range(100))]*2))'
real 0m0.046s
user 0m0.029s
sys 0m0.015s
$ time python -c 'from itertools import izip_longest; dict(izip_longest(*[iter(range(100))]*2, fillvalue=None))'
real 0m0.067s
user 0m0.042s
sys 0m0.021s
$ time python -c 'dict(map(None, *[iter(range(100000))]*2))'
real 0m0.074s
user 0m0.050s
sys 0m0.022s
$ time python -c 'from itertools import izip_longest; dict(izip_longest(*[iter(range(100000))]*2, fillvalue=None))'
real 0m0.075s
user 0m0.047s
sys 0m0.024s
See nothing to it! =)
nJoy!
It literally means "Not a Number." I suspect something is wrong with your conversion process.
Check out the Not A Number section at this reference
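If you want to check for it explicitly, here is a minimal Python illustration (the float("nan") value just stands in for whatever your conversion produced):
import math

x = float("nan")      # stand-in for a failed conversion result
print(x)              # nan
print(x == x)         # False -- NaN never compares equal, not even to itself
print(math.isnan(x))  # True  -- the reliable way to test for it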
If you import the csv file as a data.frame (and not matrix), you can also use summary.default
summary.default(mtcars)
Length Class Mode
mpg 32 -none- numeric
cyl 32 -none- numeric
disp 32 -none- numeric
hp 32 -none- numeric
drat 32 -none- numeric
wt 32 -none- numeric
qsec 32 -none- numeric
vs 32 -none- numeric
am 32 -none- numeric
gear 32 -none- numeric
carb 32 -none- numeric
Swift 3 version code:
let urlString = "file:///Users/Documents/Book/Note.txt"
let pathURL = URL(string: urlString)!
print("the url = " + pathURL.path)
print os.path.abspath(os.path.join(os.getcwd(), os.path.pardir))
You can use this to get the parent directory of the current location of your py file.
You can disable them altogether by
(setq make-backup-files nil)
There are a few existing resources you might check:
For what it's worth, my own personal guidelines that I tend to use are as follows:
A couple of other points:
You need to create the project on GitHub first. After that go to the project directory and run in terminal:
git init
git remote add origin https://github.com/xxx/yyy.git
git add .
git commit -m "first commit"
git push -u origin master
This was how I added a tracking branch so I can pull from it into my new branch:
git branch --set-upstream-to origin/Development new-branch
The use of Swift 3 @objc inference in Swift 4 mode is deprecated? Mark the function being called with @objc:
func call(){
foo()
}
@objc func foo() {
}
You can get at the data values like this:
string json = @"
[
{ ""General"" : ""At this time we do not have any frequent support requests."" },
{ ""Support"" : ""For support inquires, please see our support page."" }
]";
JArray a = JArray.Parse(json);
foreach (JObject o in a.Children<JObject>())
{
foreach (JProperty p in o.Properties())
{
string name = p.Name;
string value = (string)p.Value;
Console.WriteLine(name + " -- " + value);
}
}
Fiddle: https://dotnetfiddle.net/uox4Vt
I actually came to this question looking for an answer myself, and after reading the given answers I can only agree that the best way to compare DB query performance is to get real-world numbers, because there are just too many variables to be taken into account. BUT, I also think that comparing the numbers between them leads to no good in almost all cases. What I mean is that the numbers should always be compared with an acceptable number and definitely not compared with each other.
I can understand if one way of querying takes say 0.02 seconds and the other one takes 20 seconds, that's an enormous difference. But what if one way of querying takes 0.0000000002 seconds, and the other one takes 0.0000002 seconds ? In both cases one way is a whopping 1000 times faster than the other one, but is it really still "whopping" in the second case ?
Bottom line as I personally see it: if it performs well, go for the easy solution.
onclick="history.go(-1)"
Simply
You can use the first selector.
var header = $('.header:first')
I faced this particular error when I hadn't defined the main() function. Check that the main() function exists, check the name of the function letter by letter as Timothy described above, or check whether the file where the main function is located is included in your project.
In my case it was a cross-origin frame and it sometimes wasn't loading. The check that worked for me is: if it has loaded successfully, then when you try this code:
var iframe = document.getElementsByTagName('iframe')[0];
console.log(iframe.contentDocument);
it won't allow you to access contentDocument and will throw a cross-origin error; however, if the frame has not loaded successfully, then contentDocument will return a #document object.
If you're doing any sort of development you need to point to the JDK (Java Development Kit). Otherwise, you can point to the JRE (Java Runtime Environment).
The JDK contains everything the JRE has and more. If you're just executing Java programs, you can point to either the JRE or the JDK.
You should set JAVA_HOME
based on current Java you are using.
readlink
will print value of a symbolic link for current Java and sed
will adjust it to JRE directory:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
If you want to set up JAVA_HOME to JDK you should go up one folder more:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:jre/bin/java::")
You can just locate pg_dump
and use the full path in command
locate pg_dump
/usr/bin/pg_dump
/usr/bin/pg_dumpall
/usr/lib/postgresql/9.3/bin/pg_dump
/usr/lib/postgresql/9.3/bin/pg_dumpall
/usr/lib/postgresql/9.6/bin/pg_dump
/usr/lib/postgresql/9.6/bin/pg_dumpall
Now just use the path of the desired version in the command
/usr/lib/postgresql/9.6/bin/pg_dump books > books.out
As there is no proper wheel file at http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv, try this (it worked in the Anaconda Prompt and in PyCharm):
pip install opencv-contrib-python
pip install opencv-python
for your example:
{'profiles': [{'name':'john', 'age': 44}, {'name':'Alex','age':11}]}
you will have to do something to this effect:
JSONObject myjson = new JSONObject(the_json);
JSONArray the_json_array = myjson.getJSONArray("profiles");
this returns the array object.
Then iterating will be as follows:
int size = the_json_array.length();
ArrayList<JSONObject> arrays = new ArrayList<JSONObject>();
for (int i = 0; i < size; i++) {
JSONObject another_json_object = the_json_array.getJSONObject(i);
//Blah blah blah...
arrays.add(another_json_object);
}
//Finally
JSONObject[] jsons = new JSONObject[arrays.size()];
arrays.toArray(jsons);
//The end...
You will have to determine if the data is an array (simply check whether charAt(0) is the [ character).
Hope this helps.
On POSIX systems you may not find the -r parameter for grep, so your grep -rn "stuff" . won't run; but if you use the find command, it will:
find . -type f -exec grep -n "stuff" {} \; -print
This holds for Solaris and HP-UX.
I believe an UPDATE FROM
with a JOIN
will help:
UPDATE
Sales_Import
SET
Sales_Import.AccountNumber = RAN.AccountNumber
FROM
Sales_Import SI
INNER JOIN
RetrieveAccountNumber RAN
ON
SI.LeadID = RAN.LeadID;
For MySQL, the equivalent multi-table UPDATE syntax is:
UPDATE
Sales_Import SI,
RetrieveAccountNumber RAN
SET
SI.AccountNumber = RAN.AccountNumber
WHERE
SI.LeadID = RAN.LeadID;
public class ArrayInputAttribute : ActionFilterAttribute
{
private readonly string[] _ParameterNames;
/// <summary>
///
/// </summary>
public string Separator { get; set; }
/// <summary>
/// cons
/// </summary>
/// <param name="parameterName"></param>
public ArrayInputAttribute(params string[] parameterName)
{
_ParameterNames = parameterName;
Separator = ",";
}
/// <summary>
///
/// </summary>
public void ProcessArrayInput(HttpActionContext actionContext, string parameterName)
{
if (actionContext.ActionArguments.ContainsKey(parameterName))
{
var parameterDescriptor = actionContext.ActionDescriptor.GetParameters().FirstOrDefault(p => p.ParameterName == parameterName);
if (parameterDescriptor != null && parameterDescriptor.ParameterType.IsArray)
{
var type = parameterDescriptor.ParameterType.GetElementType();
var parameters = String.Empty;
if (actionContext.ControllerContext.RouteData.Values.ContainsKey(parameterName))
{
parameters = (string)actionContext.ControllerContext.RouteData.Values[parameterName];
}
else
{
var queryString = actionContext.ControllerContext.Request.RequestUri.ParseQueryString();
if (queryString[parameterName] != null)
{
parameters = queryString[parameterName];
}
}
var values = parameters.Split(new[] { Separator }, StringSplitOptions.RemoveEmptyEntries)
.Select(TypeDescriptor.GetConverter(type).ConvertFromString).ToArray();
var typedValues = Array.CreateInstance(type, values.Length);
values.CopyTo(typedValues, 0);
actionContext.ActionArguments[parameterName] = typedValues;
}
}
}
public override void OnActionExecuting(HttpActionContext actionContext)
{
_ParameterNames.ForEach(parameterName => ProcessArrayInput(actionContext, parameterName));
}
}
Usage:
[HttpDelete]
[ArrayInput("tagIDs")]
[Route("api/v1/files/{fileID}/tags/{tagIDs}")]
public HttpResponseMessage RemoveFileTags(Guid fileID, Guid[] tagIDs)
{
_FileRepository.RemoveFileTags(fileID, tagIDs);
return Request.CreateResponse(HttpStatusCode.OK);
}
Request uri
http://localhost/api/v1/files/2a9937c7-8201-59b7-bc8d-11a9178895d0/tags/BBA5CD5D-F07D-47A9-8DEE-D19F5FA65F63,BBA5CD5D-F07D-47A9-8DEE-D19F5FA65F63
This can be done in WebKit-based browsers (such as Chrome and Safari) with only CSS:
::-webkit-scrollbar {
width: 2em;
height: 2em
}
::-webkit-scrollbar-button {
background: #ccc
}
::-webkit-scrollbar-track-piece {
background: #888
}
::-webkit-scrollbar-thumb {
background: #eee
}
References:
If you are using Oracle 10g, you can use the DECODE
function to pivot the rows into columns:
CREATE TABLE doc_tab (
loan_number VARCHAR2(20),
document_type VARCHAR2(20),
document_id VARCHAR2(20)
);
INSERT INTO doc_tab VALUES('992452533663', 'Voters ID', 'XPD0355636');
INSERT INTO doc_tab VALUES('992452533663', 'Pan card', 'CHXPS5522D');
INSERT INTO doc_tab VALUES('992452533663', 'Drivers licence', 'DL-0420110141769');
COMMIT;
SELECT
loan_number,
MAX(DECODE(document_type, 'Voters ID', document_id)) AS voters_id,
MAX(DECODE(document_type, 'Pan card', document_id)) AS pan_card,
MAX(DECODE(document_type, 'Drivers licence', document_id)) AS drivers_licence
FROM
doc_tab
GROUP BY loan_number
ORDER BY loan_number;
Output:
LOAN_NUMBER    VOTERS_ID     PAN_CARD      DRIVERS_LICENCE
-------------  ------------  ------------  --------------------
992452533663   XPD0355636    CHXPS5522D    DL-0420110141769
You can achieve the same using Oracle PIVOT
clause, introduced in 11g:
SELECT *
FROM doc_tab
PIVOT (
MAX(document_id) FOR document_type IN ('Voters ID','Pan card','Drivers licence')
);
SQLFiddle example with both solutions: SQLFiddle example
Read more about pivoting here: Pivot In Oracle by Tim Hall
Update for Brian's answer:
If a function in Python 3 has keyword-only arguments, then you need to use inspect.getfullargspec
:
def yay(a, b=10, *, c=20, d=30):
pass
inspect.getfullargspec(yay)
yields this:
FullArgSpec(args=['a', 'b'], varargs=None, varkw=None, defaults=(10,), kwonlyargs=['c', 'd'], kwonlydefaults={'c': 20, 'd': 30}, annotations={})
From the itertools
recipes:
from itertools import tee
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
for v, w in pairwise(a):
...
It is true that, for 64-bit IEEE754 double, all integers up to 9007199254740992 == 2^53 can be exactly represented.
However, it is also worth mentioning that all representable numbers beyond 4503599627370496 == 2^52 are integers. Beyond 2^52 it becomes meaningless to test whether or not they are integers, because they are all implicitly rounded to a nearby representable value.
In the range 2^51 to 2^52, the only non-integer values are the midpoints ending with ".5", meaning any integer test after a calculation must be expected to yield at least 50% false answers.
Below 2^51 we also have ".25" and ".75", so comparing a number with its rounded counterpart in order to determine if it may be integer or not starts making some sense.
TLDR: If you want to test whether a calculated result may be integer, avoid numbers larger than 2251799813685248 == 2^51
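A quick way to see these effects, as an illustrative Python check:
# 2**53 is the last point where every integer is exactly representable
print(float(2**53) == float(2**53 + 1))  # True  -- 2**53 + 1 collapses onto 2**53
print(float(2**53 - 1) == float(2**53))  # False -- below 2**53 integers are still distinct
# Between 2**51 and 2**52 the only non-integers are the ".5" midpoints
print(2.0**51 + 0.5 == 2.0**51)          # False -- the midpoint is still representable
# From 2**52 upwards every representable double is an integer
print(2.0**52 + 0.5 == 2.0**52)          # True  -- rounded to the nearest (even) integer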
One needs to understand what a JOIN or a series of JOINs does to a set of data. With strae's post, a pe_id of 1 joined with corresponding order and items on pe_id = 1 will give you the following data to "select" from:
[ table people portion ] [ table orders portion ] [ table items portion ]
| people.pe_id | people.pe_name | orders.ord_id | orders.pe_id | orders.ord_title | item.item_id | item.ord_id | item.pe_id | item.title |
| 1 | Foo | 1 | 1 | First order | 1 | 1 | 1 | Apple |
| 1 | Foo | 1 | 1 | First order | 2 | 1 | 1 | Pear |
The joins essentially come up with a cartesian product of all the tables. You basically have that data set to select from and that's why you need a distinct count on orders.ord_id and items.item_id. Otherwise both counts will result in 2 - because you effectively have 2 rows to select from.
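To make the double counting concrete, here is a tiny Python sketch over the two joined rows shown above (the values are taken from that example):
# Each joined row repeats the order columns once per matching item
joined_rows = [
    {"ord_id": 1, "item_id": 1},
    {"ord_id": 1, "item_id": 2},
]
print(len(joined_rows))                          # 2 -- a plain COUNT sees two rows
print(len({r["ord_id"] for r in joined_rows}))   # 1 -- COUNT(DISTINCT ord_id)
print(len({r["item_id"] for r in joined_rows}))  # 2 -- COUNT(DISTINCT item_id)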
At the moment, referencing the Google Android Map API v2 you can't run Google Maps v2 on the Android emulator; you must use a device for your tests.
Create a manifest.json file on your host, and reference it from the <head> tag of your HTML:
<link rel="manifest" href="/manifest.json">
The manifest.json file:
{
"name": "news",
"short_name": "news",
"description": "des news application day",
"categories": [
"news",
"business"
],
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display": "standalone",
"orientation": "natural",
"lang": "fa",
"dir": "rtl",
"start_url": "/?application=true",
"gcm_sender_id": "482941778795",
"DO_NOT_CHANGE_GCM_SENDER_ID": "Do not change the GCM Sender ID",
"icons": [
{
"src": "https://s100.divarcdn.com/static/thewall-assets/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "https://s100.divarcdn.com/static/thewall-assets/android-chrome-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
],
"related_applications": [
{
"platform": "play",
"url": "https://play.google.com/store/apps/details?id=ir.divar"
}
],
"prefer_related_applications": true
}
Assuming employee numbers are in the first column and their names are in the second:
=VLOOKUP(A1, Sheet2!A:B, 2,false)
Before you use scan
, make sure that the requested page's Content-Type
header is text/html
, since there can be links to things like images which are not encoded in UTF-8. The page could also be non-html if you picked up a href
in something like a <link>
element. How to check this varies on what HTTP library you are using. Then, make sure the result is only ascii with String#ascii_only?
(not UTF-8 because HTML is only supposed to be using ascii, entities can be used otherwise). If both of those tests pass, it is safe to use scan
.
Demo: http://jsfiddle.net/sunnycpp/u4vjR/2/
Here I have created handle-destroy directive.
ctrl.directive('handleDestroy', function() {
return function(scope, tElement, attributes) {
scope.$on('$destroy', function() {
alert("In destroy of:" + scope.todo.text);
});
};
});
Just type ctrl+shift+I in google chrome & you will land in an isolated developer window.
Okay so three big things I noticed
You need to include the header file in your class file
Never, EVER place a using directive inside of a header or class, rather do something like std::cout << "say stuff";
Structs are completely defined within a header, structs are essentially classes that default to public
Hope this helps!
The following appears to work for 3.7+, using standard libs:
from datetime import timedelta
from datetime import timezone
import time
def currenttz():
if time.daylight:
return timezone(timedelta(seconds=-time.altzone),time.tzname[1])
else:
return timezone(timedelta(seconds=-time.timezone),time.tzname[0])
Be Careful, Follow these steps and save your time
Right Click on your Project Explorer.
Select New-> Project -> Android Application Project from Existing Code
Browse up to this path only - "C:\Users**your path**\Local\Android\android-sdk\extras\google\google_play_services"
Be careful to browse only up to google_play_services and not up to google_play_services_lib.
And this way you are able to import the google play service lib.
Let me know if you have any queries regarding the same.
Thanks
See http://www.w3schools.com/tags/att_input_accept.asp:
The accept attribute is supported in all major browsers, except Internet Explorer and Safari. Definition and Usage
The accept attribute specifies the types of files that the server accepts (that can be submitted through a file upload).
Note: The accept attribute can only be used with
<input type="file">
.Tip: Do not use this attribute as a validation tool. File uploads should be validated on the server.
Syntax
<input accept="audio/*|video/*|image/*|MIME_type" />
Tip: To specify more than one value, separate the values with a comma (e.g.
<input accept="audio/*,video/*,image/*" />
.
You might try a queuing system like Resque. You then can generate a job, that processes the information and quite fast return with the "processing" image. With this approach you won't know when it is finished though.
This solution is intended for larger scale applications, where you don't want your front machines to do the heavy lifting, so they can process user requests. Therefore it might or might not work with physical data like files and folders, but for processing more complicated logic or other asynchronous tasks (ie new registrations mails) it is nice to have and very scalable.
Npm repository is currently down. See issue #2694 on npm github
EDIT:
To use a mirror in the meanwhile:
npm set registry http://ec2-46-137-149-160.eu-west-1.compute.amazonaws.com
you can reset this later with:
npm set registry https://registry.npmjs.org/
You can make your code simpler and more readable like this
bad_words = ['bad', 'naughty']
with open('oldfile.txt') as oldfile, open('newfile.txt', 'w') as newfile:
for line in oldfile:
if not any(bad_word in line for bad_word in bad_words):
newfile.write(line)
using a Context Manager and any.
You can make a variable optional with ?, or, if you have multiple optional values, with ..., for example:
function details(name: string, country = "CA", address?: string, ...hobbies: string[]) {
// ...
}
In the above:
- name is required
- country is required and has a default value
- address is optional
- hobbies is an array of optional params
If you want to change the logging level of all the loggers, use the method below. This will enumerate over all the loggers and change the logging level to the given level. Please make sure that you DO NOT have the log4j.appender.loggerName.Threshold=DEBUG
property set in your log4j.properties
file.
public static void changeLogLevel(Level level) {
Enumeration<?> loggers = LogManager.getCurrentLoggers();
while(loggers.hasMoreElements()) {
Logger logger = (Logger) loggers.nextElement();
logger.setLevel(level);
}
}
If this happens to you on Mac (as it did to me):
Save the file as CSV (MS-DOS Comma-Separated), then run the following script:
with open(csv_filename, 'rU') as csvfile:
csvreader = csv.reader(csvfile)
for row in csvreader:
print ', '.join(row)
I had to do the same here and the above works except I had to provide login details as it was a local docker repository.
It is as per the above but with supplying the username/password in the URL.
curl -k -X GET https://yourusername:yourpassword@theregistryURL/v2/_catalog
It comes back as unformatted JSON.
I piped it through the python formatter for ease of human reading, in case you would like to have it in this format.
curl -k -X GET https://yourusername:yourpassword@theregistryURL/v2/_catalog | python -m json.tool
I ran into the same thing with the Bing Map API. URLEncoder just made things worse, but a replaceAll(" ","%20");
did the trick.
Just in case someone else stumbles on this. I was running someone else's code - make sure they are not handling the signal, so they can gracefully exit. I commented out the handling, and got the core dump.
Doubles don't have decimal places - they're not based on decimal digits to start with. You could get "the closest double to the current value when truncated to three decimal digits", but it still wouldn't be exactly the same. You'd be better off using decimal
.
Having said that, if it's only the way that rounding happens that's a problem, you can use Math.Truncate(value * 1000) / 1000;
which may do what you want. (You don't want rounding at all, by the sounds of it.) It's still potentially "dodgy" though, as the result still won't really just have three decimal places. If you did the same thing with a decimal value, however, it would work:
decimal m = 12.878999m;
m = Math.Truncate(m * 1000m) / 1000m;
Console.WriteLine(m); // 12.878
EDIT: As LBushkin pointed out, you should be clear between truncating for display purposes (which can usually be done in a format specifier) and truncating for further calculations (in which case the above should work).
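For comparison, the same truncate-for-calculation idea can be sketched in Python with the decimal module (the value is just the example from above):
from decimal import Decimal, ROUND_DOWN

m = Decimal("12.878999")
# Truncate (not round) to three decimal places
m = m.quantize(Decimal("0.001"), rounding=ROUND_DOWN)
print(m)  # 12.878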
items[node.ind] = items[node.ind]._replace(v=node.v)
(Note: Don't be discouraged to use this solution because of the leading underscore in the function _replace. Specifically for namedtuple some functions have leading underscore which is not for indicating they are meant to be "private")
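A minimal self-contained example of that pattern; the Point type and its fields are made up for illustration:
from collections import namedtuple

Point = namedtuple("Point", ["ind", "v"])
items = [Point(0, "a"), Point(1, "b")]
node = Point(1, "updated")

# namedtuples are immutable, so _replace returns a new tuple with the field swapped
items[node.ind] = items[node.ind]._replace(v=node.v)
print(items)  # [Point(ind=0, v='a'), Point(ind=1, v='updated')]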
Look at the sample from Google called Lunar Lander, the ship image there is rotated dynamically.
>>> x = "2342.34"
>>> float(x)
2342.3400000000001
There you go. Use float (which behaves like and has the same precision as a C,C++, or Java double).
On IPython 2.2.0, just typing l (lowercase L) on command mode (activated by typing Esc) works. See [Help] - [Keyboard Shortcuts] for other shortcuts.
Also, you can set default behavior to display line numbers by editing custom.js
.
EXECUTE [or EXEC] procedure_name
@parameter_1_Name = 'parameter_1_Value',
@parameter_2_name = 'parameter_2_value',
@parameter_z_name = 'parameter_z_value'
Double is a wrapper class,
The Double class wraps a value of the primitive type double in an object. An object of type Double contains a single field whose type is double.
In addition, this class provides several methods for converting a double to a String and a String to a double, as well as other constants and methods useful when dealing with a double.
The double data type,
The double data type is a double-precision 64-bit IEEE 754 floating point. Its range of values is 4.94065645841246544e-324d to 1.79769313486231570e+308d (positive or negative). For decimal values, this data type is generally the default choice. As mentioned above, this data type should never be used for precise values, such as currency.
Check each datatype with their ranges : Java's Primitive Data Types.
Important note: If you're thinking of using double for precise values, you need to re-think before using it. Java Traps: double
Maybe you're looking for something like this:
$(document).click(function(e)
{
if($(e.srcElement).attr('id')=='id')
{
alert('click on #id');
}
else
{
alert('click on something else');
}
});
You may retrieve a pointer to the clicked element using event.srcElement
.
So all you have to do is to check the id-attribute of the clicked element.
In case someone cares just about the error nothing added to commit but untracked files present (use "git add" to track) and not about Please move or remove them before you can merge, you might have a look at the answers on Git - Won't add files?
There you find at least 2 good candidates for the issue in question here: that you either are in a subfolder or in a parent folder, but not in the actual repo folder. If you are in the directory one level too high, this will definitely raise that message "nothing added to commit…", see my answer in the link for details. I do not know if the same message occurs when you are in a subfolder, but it is likely. That could fit to your explanations.
builder.setCancelable(false);
public void Mensaje(View v){
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("¿Quieres ir a el Menú principal?");
builder.setMessage("Al presionar SI iras a el menú y saldras de la materia.");
builder.setPositiveButton("SI", null);
builder.setNegativeButton("NO", null);
builder.setCancelable(false);
builder.show();
}
Recursion in Python works just as recursion does in any other language, with the recursive construct defined in terms of itself:
For example a recursive class could be a binary tree (or any tree):
class tree():
def __init__(self):
'''Initialise the tree'''
self.Data = None
self.Count = 0
self.LeftSubtree = None
self.RightSubtree = None
def Insert(self, data):
'''Add an item of data to the tree'''
if self.Data == None:
self.Data = data
self.Count += 1
elif data < self.Data:
if self.LeftSubtree == None:
# tree is a recursive class definition
self.LeftSubtree = tree()
# Insert is a recursive function
self.LeftSubtree.Insert(data)
elif data == self.Data:
self.Count += 1
elif data > self.Data:
if self.RightSubtree == None:
self.RightSubtree = tree()
self.RightSubtree.Insert(data)
if __name__ == '__main__':
T = tree()
# The root node
T.Insert('b')
# Will be put into the left subtree
T.Insert('a')
# Will be put into the right subtree
T.Insert('c')
As already mentioned a recursive structure must have a termination condition. In this class, it is not so obvious because it only recurses if new elements are added, and only does it a single time extra.
Also worth noting, python by default has a limit to the depth of recursion available, to avoid absorbing all of the computer's memory. On my computer this is 1000. I don't know if this changes depending on hardware, etc. To see yours :
import sys
sys.getrecursionlimit()
and to set it :
import sys  # (if you haven't already)
sys.setrecursionlimit(2000)  # pass whatever new limit you want
Edit: I can't guarantee that my binary tree is the most efficient design ever. If anyone can improve it, I'd be happy to hear how.
Another crucial method not mentioned here is using the same TCP connection for multiple HTTP requests, and exactly one curl command for this.
This is very useful to save network bandwidth, client and server resources, and overall the need of using multiple curl commands, as curl by default closes the connection when end of command is reached.
Keeping the connection open and reusing it is very common for standard clients running a web-app.
Starting curl version 7.36.0, the --next
or -:
command-line option allows to chain multiple requests, and usable both in command-line and scripting.
For example:
curl http://example.com/?update_=1 -: http://example.com/foo
curl http://example.com/?update_=1 -: -d "I am posting this string" http://example.com/?update_=2
curl -o 'my_output_file' http://example.com/?update_=1 -: -d "my_data" -s -m 10 http://example.com/foo -: -o /dev/null http://example.com/random
From the curl manpage:
-:, --next
Tells curl to use a separate operation for the following URL and associated options. This allows you to send several URL requests, each with their own specific options, for example, such as different user names or custom requests for each.
-:
,--next
will reset all local options and only global ones will have their values survive over to the operation following the -:, --next instruction. Global options include -v, --verbose, --trace, --trace-ascii and --fail-early.For example, you can do both a GET and a POST in a single command line:
curl www1.example.com --next -d postthis www2.example.com
Added in 7.36.0.
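If you end up doing this from Python rather than the shell, a rough analogue (not curl itself) is to reuse one TCP connection through a requests.Session; the URLs below are just the example hosts from above:
import requests

# One Session keeps the underlying connection alive and reuses it across requests
with requests.Session() as s:
    r1 = s.get("http://example.com/?update_=1")
    r2 = s.post("http://example.com/foo", data="I am posting this string")
    print(r1.status_code, r2.status_code)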
<asp:Button ID="btnGet" runat="server" Text="Get" OnClick="btnGet_Click" OnClientClick="retun callMethod();" />
<script type="text/javascript">
function callMethod() {
//your logic should be here and make sure your logic code note returing function
return false;
}
</script>
My suggestion to you is to write your own utility service. You can include the service in each controller or create a parent controller, assign the utility service to your scope and then every child controller will inherit this without you having to include it.
Example: http://plnkr.co/edit/NI7V9cLkQmEtWO36CPXy?p=preview
var app = angular.module('plunker', []);
app.controller('MainCtrl', function($scope, Utils) {
$scope.utils = Utils;
});
app.controller('ChildCtrl', function($scope, Utils) {
$scope.undefined1 = Utils.isUndefinedOrNull(1); // standard DI
$scope.undefined2 = $scope.utils.isUndefinedOrNull(1); // MainCtrl is parent
});
app.factory('Utils', function() {
var service = {
isUndefinedOrNull: function(obj) {
return !angular.isDefined(obj) || obj===null;
}
}
return service;
});
Or you could add it to the rootScope as well. Just a few options for extending angular with your own utility functions.
To have it in one line, this is what I did:
if any(([True if 'MICHAEL89' in username.upper() else False for username in USERNAMES])):
print('username exists in list')
I didn't test it time-wise though. I am not sure how fast/efficient it is.
The best method I know of is to use a Perceptual Hash. There appears to be a good open source implementation of such a hash available at:
The main idea is that each image is reduced down to a small hash code or 'fingerprint' by identifying salient features in the original image file and hashing a compact representation of those features (rather than hashing the image data directly). This means that the false positives rate is much reduced over a simplistic approach such as reducing images down to a tiny thumbprint sized image and comparing thumbprints.
phash offers several types of hash and can be used for images, audio or video.
You need to be more explicit and add the start and end times as well, down to the milliseconds:
select *
from xxx
where dates between '2012-10-26 00:00:00.000' and '2012-10-27 23:59:59.997'
The database can very well interpret '2012-10-27'
as '2012-10-27 00:00:00.000'
.
Postgres hasn't implemented an equivalent to INSERT OR REPLACE
. From the ON CONFLICT
docs (emphasis mine):
It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict.
Though it doesn't give you shorthand for replacement, ON CONFLICT DO UPDATE
applies more generally, since it lets you set new values based on preexisting data. For example:
INSERT INTO users (id, level)
VALUES (1, 0)
ON CONFLICT (id) DO UPDATE
SET level = users.level + 1;
This is what I use in my application:
static void Main()
{
bool mutexCreated = false;
System.Threading.Mutex mutex = new System.Threading.Mutex( true, @"Local\slimCODE.slimKEYS.exe", out mutexCreated );
if( !mutexCreated )
{
if( MessageBox.Show(
"slimKEYS is already running. Hotkeys cannot be shared between different instances. Are you sure you wish to run this second instance?",
"slimKEYS already running",
MessageBoxButtons.YesNo,
MessageBoxIcon.Question ) != DialogResult.Yes )
{
mutex.Close();
return;
}
}
// The usual stuff with Application.Run()
mutex.Close();
}
You can also simply surround the chart with a container (according to the official doc http://www.chartjs.org/docs/latest/general/responsive.html#important-note)
<div class="chart-container">
<canvas id="myCanvas"></canvas>
</div>
CSS
.chart-container {
width: 1000px;
height:600px
}
and with options
responsive:true
maintainAspectRatio: false
var xInt = parseInt(x)
This will return either the integer
value, or NaN
.
Read more about parseInt here.
Another approach that is especially useful if you want to store data coming from an external API or a DB would be this:
Create a class that represent your data model
export class Data{
private id:number;
private text: string;
constructor(id,text) {
this.id = id;
this.text = text;
}
}
In your component class you create an empty array of type Data
and populate this array whenever you get a response from API or whatever data source you are using
export class AppComponent {
private search_key: string;
private dataList: Data[] = [];
getWikiData() {
this.httpService.getDataFromAPI()
.subscribe(data => {
this.parseData(data);
});
}
parseData(jsonData: string) {
//considering you get your data in json arrays
for (let i = 0; i < jsonData[1].length; i++) {
const data = new Data(jsonData[1][i], jsonData[2][i]);
this.dataList.push(data);
}
}
}
This errors have two common causes: The element has been deleted entirely, or the element is no longer attached to the DOM.
If you already checked if it is not your case, you could be facing the same problem as me.
The element in the DOM is not found because your page is not entirely loaded when Selenium is searching for the element. To solve that, you can put an explicit wait condition that tells Selenium to wait until the element is available to be clicked on.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID, 'someid')))
set ONE column's width to any value, i.e. width="*"
Based on @tetsuo answer, with java 8 :
Integer i = ...
switch (Optional.ofNullable(i).orElse(DEFAULT_VALUE)) {
case DEFAULT_VALUE:
doDefault();
break;
}
This is my working code. Thanks to Paulo for the links. You create a custom error handler and call the trigger_error
function with the correct $errno
exception, even if it's not an error. Make sure you can write to the log file directory without administrator access.
<?php
$logfile_dir = "C:\workspace\logs\\"; // or "/var/log/" for Linux
$logfile = $logfile_dir . "php_" . date("y-m-d") . ".log";
$logfile_delete_days = 30;
function error_handler($errno, $errstr, $errfile, $errline)
{
global $logfile_dir, $logfile, $logfile_delete_days;
if (!(error_reporting() & $errno)) {
// This error code is not included in error_reporting, so let it fall
// through to the standard PHP error handler
return false;
}
$filename = basename($errfile);
switch ($errno) {
case E_USER_ERROR:
file_put_contents($logfile, date("y-m-d H:i:s.").gettimeofday()["usec"] . " $filename ($errline): " . "ERROR >> message = [$errno] $errstr\n", FILE_APPEND | LOCK_EX);
exit(1);
break;
case E_USER_WARNING:
file_put_contents($logfile, date("y-m-d H:i:s.").gettimeofday()["usec"] . " $filename ($errline): " . "WARNING >> message = $errstr\n", FILE_APPEND | LOCK_EX);
break;
case E_USER_NOTICE:
file_put_contents($logfile, date("y-m-d H:i:s.").gettimeofday()["usec"] . " $filename ($errline): " . "NOTICE >> message = $errstr\n", FILE_APPEND | LOCK_EX);
break;
default:
file_put_contents($logfile, date("y-m-d H:i:s.").gettimeofday()["usec"] . " $filename ($errline): " . "UNKNOWN >> message = $errstr\n", FILE_APPEND | LOCK_EX);
break;
}
// delete any files older than 30 days
$files = glob($logfile_dir . "*");
$now = time();
foreach ($files as $file)
if (is_file($file))
if ($now - filemtime($file) >= 60 * 60 * 24 * $logfile_delete_days)
unlink($file);
return true; // Don't execute PHP internal error handler
}
set_error_handler("error_handler");
trigger_error("testing 1,2,3", E_USER_NOTICE);
?>
If you're using Swift 2, now you can only use print() to write something to the output.
Apple has combined both println() and print() functions into one.
Updated to iOS 9
By default, the function terminates the line it prints by adding a line break.
print("Hello Swift")
Terminator
To print a value without a line break after it, pass an empty string as the terminator
print("Hello Swift", terminator: "")
Separator
You now can use separator to concatenate multiple items
print("Hello", "Swift", 2, separator:" ")
Both
Or you could combine using in this way
print("Hello", "Swift", 2, separator:" ", terminator:".")
I was frustrated by the lack of a fast, high quality way to marshal JSON to a colorized string in Go so I wrote my own Marshaller called ColorJSON.
With it, you can easily produce output like this using very little code:
package main
import (
"fmt"
"encoding/json"
"github.com/TylerBrock/colorjson"
)
func main() {
str := `{
"str": "foo",
"num": 100,
"bool": false,
"null": null,
"array": ["foo", "bar", "baz"],
"obj": { "a": 1, "b": 2 }
}`
var obj map[string]interface{}
json.Unmarshal([]byte(str), &obj)
// Make a custom formatter with indent set
f := colorjson.NewFormatter()
f.Indent = 4
// Marshall the Colorized JSON
s, _ := f.Marshal(obj)
fmt.Println(string(s))
}
I'm writing the documentation for it now but I was excited to share my solution.
What you have to do is something like this:
<Button Name="MyFlatImageButton"
Background="Transparent"
BorderBrush="Transparent"
BorderThickness="0"
Padding="-4">
<Image Source="MyImage.png"/>
</Button>
Hope this is what you were looking for.
Edit: Sorry, forgot to mention that if you want to see the button-border when you hover over the image, all you have to do is skip the Padding="-4".
In most cases you have to initialize the array,
let list: number[] = [];
NumPy's lack of a particular domain-specific function is perhaps due to the Core Team's discipline and fidelity to NumPy's prime directive: provide an N-dimensional array type, as well as functions for creating, and indexing those arrays. Like many foundational objectives, this one is not small, and NumPy does it brilliantly.
The (much) larger SciPy contains a much larger collection of domain-specific libraries (called subpackages by SciPy devs)--for instance, numerical optimization (optimize), signal processsing (signal), and integral calculus (integrate).
My guess is that the function you are after is in at least one of the SciPy subpackages (scipy.signal perhaps); however, i would look first in the collection of SciPy scikits, identify the relevant scikit(s) and look for the function of interest there.
Scikits are independently developed packages based on NumPy/SciPy and directed to a particular technical discipline (e.g., scikits-image, scikits-learn, etc.). Several of these (in particular, the awesome OpenOpt for numerical optimization) were highly regarded, mature projects long before choosing to reside under the relatively new scikits rubric. The Scikits homepage linked to above lists about 30 such scikits, though at least several of those are no longer under active development.
Following this advice would lead you to scikits-timeseries; however, that package is no longer under active development. In effect, Pandas has become, AFAIK, the de facto NumPy-based time series library.
Pandas has several functions that can be used to calculate a moving average; the simplest of these is probably rolling_mean, which you use like so:
>>> # import pandas (this answer uses the alias PD; the conventional alias is pd)
>>> import pandas as PD
>>> import numpy as NP
>>> # prepare some fake data:
>>> # the date-time indices:
>>> t = PD.date_range('1/1/2010', '12/31/2012', freq='D')
>>> # the data:
>>> x = NP.arange(0, t.shape[0])
>>> # combine the data & index into a Pandas 'Series' object
>>> D = PD.Series(x, t)
Now, just call the function rolling_mean passing in the Series object and a window size, which in my example below is 10 days.
>>> d_mva = PD.rolling_mean(D, 10)
>>> # d_mva is the same size as the original Series
>>> d_mva.shape
(1096,)
>>> # though obviously the first w-1 values are NaN, where w is the window size
>>> d_mva[:3]
2010-01-01 NaN
2010-01-02 NaN
2010-01-03 NaN
Verify that it worked--e.g., compare values 10-15 in the original series with the new series smoothed with the rolling mean:
>>> D[10:15]
2010-01-11 2.041076
2010-01-12 2.041076
2010-01-13 2.720585
2010-01-14 2.720585
2010-01-15 3.656987
Freq: D
>>> d_mva[10:20]
2010-01-11 3.131125
2010-01-12 3.035232
2010-01-13 2.923144
2010-01-14 2.811055
2010-01-15 2.785824
Freq: D
The function rolling_mean, along with about a dozen or so other functions, is informally grouped in the Pandas documentation under the rubric moving window functions; a second, related group of functions in Pandas is referred to as exponentially-weighted functions (e.g., ewma, which calculates an exponentially weighted moving average). The fact that this second group is not included in the first (moving window functions) is perhaps because the exponentially-weighted transforms don't rely on a fixed-length window.
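Note that in newer Pandas releases rolling_mean was deprecated and eventually removed; the equivalent today is the .rolling() method on a Series or DataFrame. A sketch of the same 10-day calculation on current Pandas, assuming the Series D built above:
>>> # same 10-day moving average with the newer rolling() API
>>> d_mva = D.rolling(window=10).mean()
>>> d_mva.shape
(1096,)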
Just use this. It's easy.
<script type="text/javascript">
    var images = new Array();
    images[0] = "img1.jpg";
    images[1] = "img2.jpg";
    images[2] = "img3.jpg";

    var x = 0;
    function changeImage() {
        document.getElementById("img").src = images[x];
        x++;
        if (images.length == x) {
            x = 0;
        }
    }

    // pass the function itself to setInterval rather than a string
    setInterval(changeImage, 30000);
</script>
And in the body, write this code:
<img id="img" src="imgstart.jpg">
This is because of a higher JDK at compile time and a lower JDK at runtime. So you just need to update your JDK version, possibly to JDK 7.
You may also check Unsupported major.minor version 51.0
Well, actually, React is not well suited to calling child methods from the parent. Some frameworks, like Cycle.js, make it easy to access data from both parent and child and react to it.
Also, there is a good chance you don't really need it. Consider moving the call into the existing component; that is a much more independent solution. But sometimes you still need it, and then you have a few choices.
UPD: if you need to share some functionality which doesn't involve any state (like static functions in OOP), then there is no need to contain it inside components. Just declare it separately and invoke it when needed:
let counter = 0;

function handleInstantiate() {
    counter++;
}

// inside the component's constructor:
constructor(props) {
    super(props);
    handleInstantiate();
}
I used this code and it works just fine
const int margin = 5;
Rectangle rect = new Rectangle(
    Screen.PrimaryScreen.WorkingArea.X + margin,
    Screen.PrimaryScreen.WorkingArea.Y + margin,
    Screen.PrimaryScreen.WorkingArea.Width - 2 * margin,
    Screen.PrimaryScreen.WorkingArea.Height - 2 * (margin - 7));
this.Bounds = rect;
Simple and 100% working trick
(Make sure you install cmake)
My Anaconda Python version: 3.6.8 (64-bit) | OS: Windows 10
python -m pip install https://files.pythonhosted.org/packages/0e/ce/f8a3cff33ac03a8219768f0694c5d703c8e037e6aba2e865f9bae22ed63c/dlib-19.8.1-cp36-cp36m-win_amd64.whl#sha256=794994fa2c54e7776659fddb148363a5556468a6d5d46be8dad311722d54bfcf
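If the wheel installs cleanly, a quick sanity check (my own suggestion, not part of the original instructions) is to import the module from the same interpreter:
# confirm dlib is importable from the Anaconda environment
import dlib
print(getattr(dlib, "__version__", "dlib imported successfully"))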
This is an alternative description - just in case someone got stuck with the description above. Follow the steps:
1. Download apktool.bat (or apktool for Linux) and apktool_<version>.jar from http://ibotpeaches.github.io/Apktool/install/ , rename the jar to apktool.jar, and put both files in the same folder.
2. Open a shell (cmd.exe) and change into that folder; verify that a Java environment is installed (for Linux, check the notes regarding required libraries as well).
3. Run apktool decode [apk file]. Intermediate result: resource files, AndroidManifest.xml.
4. Extract classes.dex from the APK file (an APK is an ordinary ZIP archive). Intermediate result: classes.dex.
5. Download dex2jar-0.0.9.15.zip from http://code.google.com/p/dex2jar/downloads/detail?name=dex2jar-0.0.9.15.zip&can=2&q= and drag classes.dex onto dex2jar.bat (or enter <path_to>\dex2jar.bat classes.dex in a DOS box; for Linux use dex2jar.sh). Intermediate result: classes_dex2jar.jar.
6. Open classes_dex2jar.jar in a Java decompiler (this step might be optional depending on the used decompiler). Result: source code.
Note: it is not allowed to decompile third-party packages; this guide is intended to recover personal source code from an APK file only; finally, the resulting code will most likely be obfuscated.
Use %in% as follows:
A$C %in% B$C
which will tell you which values of column C of A are also in B.
What is returned is a logical vector. In the specific case of your example, you get:
A$C %in% B$C
# [1] TRUE FALSE TRUE TRUE
You can use this as an index to the rows of A, or as an index to A$C to get the actual values:
# as a row index
A[A$C %in% B$C, ] # note the comma to indicate we are indexing rows
# as an index to A$C
A$C[A$C %in% B$C]
[1] 1 3 4 # returns all values of A$C that are in B$C
We can negate it too:
A$C[!A$C %in% B$C]
[1] 2 # returns all values of A$C that are NOT in B$C
You can also test a single value or a single element:
2 %in% B$C # "is the value 2 in B$C ?"
# FALSE
A$C[2] %in% B$C # "is the 2nd element of A$C in B$C ?"
# FALSE
The above method for Visual Studio didn't seem to apply to Visual Studio 2013, but I was able to find the described checkbox via the Project menu: select your project (probably the last item on the submenu) to get to the dialog with the checkbox (on the Build tab).
Uncommenting set bell-style none in /etc/inputrc and creating a .bash_profile with setterm -blength 0 didn't stop vim from beeping.
What worked for me was creating a .vimrc file in my home directory with set visualbell.
Source: https://linuxconfig.org/turn-off-beep-bell-on-linux-terminal
function tryMe(param1, param2) {
    console.log(param1 + " and " + param2);
}

function tryMe2(param1) {
    console.log(param1);
}

function callbackTester(callback, ...params) {
    callback(...params);
}

callbackTester(tryMe, "hello", "goodbye");
callbackTester(tryMe2, "hello");
Read more about the spread syntax.
There is no support for arrays in SQL Server, but there are several ways by which you can pass a collection to a stored procedure.
The link below may help you.
You could do it in two lines like this:
now = datetime.now()
last_month = datetime(now.year, now.month - 1, now.day)
Remember the import:
from datetime import datetime
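Be aware that this two-liner fails in January (now.month - 1 becomes 0) and whenever the previous month has fewer days than now.day. A more robust sketch using only the standard library (my addition, not part of the original answer) steps back through the first day of the current month:
from datetime import datetime, timedelta

now = datetime.now()
# go to the first day of this month, then step back one day,
# which always lands on the last day of the previous month
last_month = now.replace(day=1) - timedelta(days=1)
print(last_month.year, last_month.month)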
To find a value and select it:
private void selectValue(Spinner spinner, Object value) {
    for (int i = 0; i < spinner.getCount(); i++) {
        if (spinner.getItemAtPosition(i).equals(value)) {
            spinner.setSelection(i);
            break;
        }
    }
}
I just think of Rebuild as performing the Clean first followed by the Build. Perhaps I am wrong ... comments?
The main differences are:
1) An OFFLINE index rebuild is faster than an ONLINE rebuild.
2) Extra disk space is required during SQL Server online index rebuilds.
3) SQL Server locks are acquired during SQL Server online index rebuilds.